00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 106 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3284 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.111 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.112 The recommended git tool is: git 00:00:00.112 using credential 00000000-0000-0000-0000-000000000002 00:00:00.113 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.146 Fetching changes from the remote Git repository 00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.189 Using shallow fetch with depth 1 00:00:00.189 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.189 > git --version # timeout=10 00:00:00.207 > git --version # 'git version 2.39.2' 00:00:00.207 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.219 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.219 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.572 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.581 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.591 Checking out Revision 1c6ed56008363df82da0fcec030d6d5a1f7bd340 (FETCH_HEAD) 00:00:05.591 > git config core.sparsecheckout # timeout=10 00:00:05.602 > git read-tree -mu HEAD # timeout=10 00:00:05.618 > git checkout -f 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=5 00:00:05.637 Commit message: "spdk-abi-per-patch: pass revision to subbuild" 00:00:05.637 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10 00:00:05.747 [Pipeline] Start of Pipeline 00:00:05.761 [Pipeline] library 00:00:05.762 Loading library shm_lib@master 00:00:05.763 Library shm_lib@master is cached. Copying from home. 00:00:05.777 [Pipeline] node 00:00:05.787 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.788 [Pipeline] { 00:00:05.798 [Pipeline] catchError 00:00:05.799 [Pipeline] { 00:00:05.809 [Pipeline] wrap 00:00:05.817 [Pipeline] { 00:00:05.869 [Pipeline] stage 00:00:05.871 [Pipeline] { (Prologue) 00:00:05.888 [Pipeline] echo 00:00:05.889 Node: VM-host-WFP1 00:00:05.894 [Pipeline] cleanWs 00:00:05.904 [WS-CLEANUP] Deleting project workspace... 00:00:05.904 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.910 [WS-CLEANUP] done 00:00:06.082 [Pipeline] setCustomBuildProperty 00:00:06.168 [Pipeline] httpRequest 00:00:06.190 [Pipeline] echo 00:00:06.192 Sorcerer 10.211.164.101 is alive 00:00:06.199 [Pipeline] httpRequest 00:00:06.204 HttpMethod: GET 00:00:06.204 URL: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:06.204 Sending request to url: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:06.227 Response Code: HTTP/1.1 200 OK 00:00:06.227 Success: Status code 200 is in the accepted range: 200,404 00:00:06.228 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:28.399 [Pipeline] sh 00:00:28.683 + tar --no-same-owner -xf jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:28.697 [Pipeline] httpRequest 00:00:28.714 [Pipeline] echo 00:00:28.716 Sorcerer 10.211.164.101 is alive 00:00:28.724 [Pipeline] httpRequest 00:00:28.729 HttpMethod: GET 00:00:28.729 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:28.730 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:28.735 Response Code: HTTP/1.1 200 OK 00:00:28.736 Success: Status code 200 is in the accepted range: 200,404 00:00:28.736 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:21.612 [Pipeline] sh 00:01:21.892 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:24.439 [Pipeline] sh 00:01:24.720 + git -C spdk log --oneline -n5 00:01:24.720 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:24.720 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:24.720 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:24.720 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:01:24.720 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:01:24.739 [Pipeline] withCredentials 00:01:24.749 > git --version # timeout=10 00:01:24.760 > git --version # 'git version 2.39.2' 00:01:24.777 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:24.779 [Pipeline] { 00:01:24.807 [Pipeline] retry 00:01:24.809 [Pipeline] { 00:01:24.826 [Pipeline] sh 00:01:25.109 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:26.057 [Pipeline] } 00:01:26.080 [Pipeline] // retry 00:01:26.085 [Pipeline] } 00:01:26.105 [Pipeline] // withCredentials 00:01:26.115 [Pipeline] httpRequest 00:01:26.141 [Pipeline] echo 00:01:26.143 Sorcerer 10.211.164.101 is alive 00:01:26.152 [Pipeline] httpRequest 00:01:26.156 HttpMethod: GET 00:01:26.156 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:26.157 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:26.170 Response Code: HTTP/1.1 200 OK 00:01:26.171 Success: Status code 200 is in the accepted range: 200,404 00:01:26.171 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:34.924 [Pipeline] sh 00:01:35.204 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:36.620 [Pipeline] sh 00:01:36.897 + git -C dpdk log --oneline -n5 00:01:36.897 caf0f5d395 version: 22.11.4 00:01:36.897 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:36.897 dc9c799c7d vhost: fix missing spinlock unlock 
00:01:36.897 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:36.897 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:36.914 [Pipeline] writeFile 00:01:36.930 [Pipeline] sh 00:01:37.209 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:37.219 [Pipeline] sh 00:01:37.497 + cat autorun-spdk.conf 00:01:37.497 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.497 SPDK_TEST_NVME=1 00:01:37.497 SPDK_TEST_FTL=1 00:01:37.497 SPDK_TEST_ISAL=1 00:01:37.497 SPDK_RUN_ASAN=1 00:01:37.497 SPDK_RUN_UBSAN=1 00:01:37.497 SPDK_TEST_XNVME=1 00:01:37.497 SPDK_TEST_NVME_FDP=1 00:01:37.497 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:37.497 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:37.497 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.504 RUN_NIGHTLY=1 00:01:37.505 [Pipeline] } 00:01:37.521 [Pipeline] // stage 00:01:37.536 [Pipeline] stage 00:01:37.538 [Pipeline] { (Run VM) 00:01:37.551 [Pipeline] sh 00:01:37.832 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:37.832 + echo 'Start stage prepare_nvme.sh' 00:01:37.832 Start stage prepare_nvme.sh 00:01:37.832 + [[ -n 6 ]] 00:01:37.832 + disk_prefix=ex6 00:01:37.832 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:37.832 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:37.832 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:37.832 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.832 ++ SPDK_TEST_NVME=1 00:01:37.832 ++ SPDK_TEST_FTL=1 00:01:37.832 ++ SPDK_TEST_ISAL=1 00:01:37.832 ++ SPDK_RUN_ASAN=1 00:01:37.832 ++ SPDK_RUN_UBSAN=1 00:01:37.832 ++ SPDK_TEST_XNVME=1 00:01:37.832 ++ SPDK_TEST_NVME_FDP=1 00:01:37.832 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:37.832 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:37.832 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.832 ++ RUN_NIGHTLY=1 00:01:37.832 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:37.832 + nvme_files=() 00:01:37.832 + declare -A nvme_files 00:01:37.832 + backend_dir=/var/lib/libvirt/images/backends 00:01:37.832 + nvme_files['nvme.img']=5G 00:01:37.832 + nvme_files['nvme-cmb.img']=5G 00:01:37.832 + nvme_files['nvme-multi0.img']=4G 00:01:37.832 + nvme_files['nvme-multi1.img']=4G 00:01:37.832 + nvme_files['nvme-multi2.img']=4G 00:01:37.832 + nvme_files['nvme-openstack.img']=8G 00:01:37.832 + nvme_files['nvme-zns.img']=5G 00:01:37.832 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:37.832 + (( SPDK_TEST_FTL == 1 )) 00:01:37.832 + nvme_files["nvme-ftl.img"]=6G 00:01:37.832 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:37.832 + nvme_files["nvme-fdp.img"]=1G 00:01:37.832 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:37.832 + for nvme in "${!nvme_files[@]}" 00:01:37.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:37.832 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.832 + for nvme in "${!nvme_files[@]}" 00:01:37.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:01:37.832 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:37.832 + for nvme in "${!nvme_files[@]}" 00:01:37.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:37.832 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.832 + for nvme in "${!nvme_files[@]}" 00:01:37.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:37.832 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:37.832 + for nvme in "${!nvme_files[@]}" 00:01:37.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:37.832 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.832 + for nvme in "${!nvme_files[@]}" 00:01:37.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:38.091 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:38.091 + for nvme in "${!nvme_files[@]}" 00:01:38.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:38.091 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:38.091 + for nvme in "${!nvme_files[@]}" 00:01:38.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:01:38.091 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:38.091 + for nvme in "${!nvme_files[@]}" 00:01:38.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:38.091 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:38.091 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:38.091 + echo 'End stage prepare_nvme.sh' 00:01:38.091 End stage prepare_nvme.sh 00:01:38.104 [Pipeline] sh 00:01:38.391 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:38.391 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:38.391 00:01:38.391 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:38.391 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:38.391 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:38.391 HELP=0 00:01:38.391 DRY_RUN=0 00:01:38.391 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:01:38.391 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:38.391 NVME_AUTO_CREATE=0 00:01:38.391 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:01:38.391 NVME_CMB=,,,, 00:01:38.391 NVME_PMR=,,,, 00:01:38.391 NVME_ZNS=,,,, 00:01:38.391 NVME_MS=true,,,, 00:01:38.391 NVME_FDP=,,,on, 00:01:38.391 SPDK_VAGRANT_DISTRO=fedora38 00:01:38.391 SPDK_VAGRANT_VMCPU=10 00:01:38.391 SPDK_VAGRANT_VMRAM=12288 00:01:38.391 SPDK_VAGRANT_PROVIDER=libvirt 00:01:38.391 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:38.391 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:38.391 SPDK_OPENSTACK_NETWORK=0 00:01:38.392 VAGRANT_PACKAGE_BOX=0 00:01:38.392 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:38.392 FORCE_DISTRO=true 00:01:38.392 VAGRANT_BOX_VERSION= 00:01:38.392 EXTRA_VAGRANTFILES= 00:01:38.392 NIC_MODEL=e1000 00:01:38.392 00:01:38.392 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:01:38.392 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:40.919 Bringing machine 'default' up with 'libvirt' provider... 00:01:41.855 ==> default: Creating image (snapshot of base box volume). 00:01:42.115 ==> default: Creating domain with the following settings... 
00:01:42.115 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721489835_12b3e1f236a74b97867b 00:01:42.115 ==> default: -- Domain type: kvm 00:01:42.115 ==> default: -- Cpus: 10 00:01:42.115 ==> default: -- Feature: acpi 00:01:42.115 ==> default: -- Feature: apic 00:01:42.115 ==> default: -- Feature: pae 00:01:42.115 ==> default: -- Memory: 12288M 00:01:42.115 ==> default: -- Memory Backing: hugepages: 00:01:42.115 ==> default: -- Management MAC: 00:01:42.115 ==> default: -- Loader: 00:01:42.115 ==> default: -- Nvram: 00:01:42.115 ==> default: -- Base box: spdk/fedora38 00:01:42.115 ==> default: -- Storage pool: default 00:01:42.115 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721489835_12b3e1f236a74b97867b.img (20G) 00:01:42.115 ==> default: -- Volume Cache: default 00:01:42.115 ==> default: -- Kernel: 00:01:42.115 ==> default: -- Initrd: 00:01:42.115 ==> default: -- Graphics Type: vnc 00:01:42.115 ==> default: -- Graphics Port: -1 00:01:42.115 ==> default: -- Graphics IP: 127.0.0.1 00:01:42.115 ==> default: -- Graphics Password: Not defined 00:01:42.115 ==> default: -- Video Type: cirrus 00:01:42.115 ==> default: -- Video VRAM: 9216 00:01:42.115 ==> default: -- Sound Type: 00:01:42.115 ==> default: -- Keymap: en-us 00:01:42.115 ==> default: -- TPM Path: 00:01:42.115 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:42.115 ==> default: -- Command line args: 00:01:42.115 ==> default: -> value=-device, 00:01:42.115 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:42.115 ==> default: -> value=-drive, 00:01:42.115 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:42.115 ==> default: -> value=-device, 00:01:42.115 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:42.115 ==> default: -> value=-device, 00:01:42.115 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:42.115 ==> default: -> value=-drive, 00:01:42.115 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:01:42.115 ==> default: -> value=-device, 00:01:42.115 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:42.115 ==> default: -> value=-device, 00:01:42.115 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:42.115 ==> default: -> value=-drive, 00:01:42.115 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:42.115 ==> default: -> value=-device, 00:01:42.115 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:42.115 ==> default: -> value=-drive, 00:01:42.115 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:42.115 ==> default: -> value=-device, 00:01:42.115 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:42.115 ==> default: -> value=-drive, 00:01:42.115 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:42.115 ==> default: -> value=-device, 00:01:42.115 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:42.115 ==> default: -> value=-device, 00:01:42.115 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:42.115 ==> default: -> value=-device, 00:01:42.115 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:42.115 ==> default: -> value=-drive, 00:01:42.115 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:42.115 ==> default: -> value=-device, 00:01:42.115 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:42.374 ==> default: Creating shared folders metadata... 00:01:42.374 ==> default: Starting domain. 00:01:44.280 ==> default: Waiting for domain to get an IP address... 00:02:02.365 ==> default: Waiting for SSH to become available... 00:02:02.365 ==> default: Configuring and enabling network interfaces... 00:02:06.550 default: SSH address: 192.168.121.141:22 00:02:06.550 default: SSH username: vagrant 00:02:06.550 default: SSH auth method: private key 00:02:09.835 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:17.949 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:24.516 ==> default: Mounting SSHFS shared folder... 00:02:26.435 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:26.435 ==> default: Checking Mount.. 00:02:27.839 ==> default: Folder Successfully Mounted! 00:02:27.839 ==> default: Running provisioner: file... 00:02:28.814 default: ~/.gitconfig => .gitconfig 00:02:29.382 00:02:29.382 SUCCESS! 00:02:29.382 00:02:29.382 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:29.382 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:29.382 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:29.382 00:02:29.390 [Pipeline] } 00:02:29.405 [Pipeline] // stage 00:02:29.413 [Pipeline] dir 00:02:29.413 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:02:29.415 [Pipeline] { 00:02:29.428 [Pipeline] catchError 00:02:29.429 [Pipeline] { 00:02:29.442 [Pipeline] sh 00:02:29.721 + vagrant ssh-config --host vagrant 00:02:29.721 + sed -ne /^Host/,$p 00:02:29.721 + tee ssh_conf 00:02:32.253 Host vagrant 00:02:32.253 HostName 192.168.121.141 00:02:32.253 User vagrant 00:02:32.253 Port 22 00:02:32.253 UserKnownHostsFile /dev/null 00:02:32.253 StrictHostKeyChecking no 00:02:32.253 PasswordAuthentication no 00:02:32.253 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:32.253 IdentitiesOnly yes 00:02:32.253 LogLevel FATAL 00:02:32.253 ForwardAgent yes 00:02:32.253 ForwardX11 yes 00:02:32.253 00:02:32.266 [Pipeline] withEnv 00:02:32.269 [Pipeline] { 00:02:32.284 [Pipeline] sh 00:02:32.564 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:32.564 source /etc/os-release 00:02:32.564 [[ -e /image.version ]] && img=$(< /image.version) 00:02:32.564 # Minimal, systemd-like check. 
00:02:32.564 if [[ -e /.dockerenv ]]; then 00:02:32.564 # Clear garbage from the node's name: 00:02:32.564 # agt-er_autotest_547-896 -> autotest_547-896 00:02:32.564 # $HOSTNAME is the actual container id 00:02:32.564 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:32.564 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:32.564 # We can assume this is a mount from a host where container is running, 00:02:32.564 # so fetch its hostname to easily identify the target swarm worker. 00:02:32.564 container="$(< /etc/hostname) ($agent)" 00:02:32.564 else 00:02:32.564 # Fallback 00:02:32.564 container=$agent 00:02:32.564 fi 00:02:32.564 fi 00:02:32.564 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:32.564 00:02:32.835 [Pipeline] } 00:02:32.855 [Pipeline] // withEnv 00:02:32.864 [Pipeline] setCustomBuildProperty 00:02:32.879 [Pipeline] stage 00:02:32.881 [Pipeline] { (Tests) 00:02:32.897 [Pipeline] sh 00:02:33.178 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:33.485 [Pipeline] sh 00:02:33.766 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:34.036 [Pipeline] timeout 00:02:34.036 Timeout set to expire in 40 min 00:02:34.037 [Pipeline] { 00:02:34.052 [Pipeline] sh 00:02:34.333 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:34.898 HEAD is now at 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:02:34.911 [Pipeline] sh 00:02:35.190 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:35.462 [Pipeline] sh 00:02:35.741 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:36.016 [Pipeline] sh 00:02:36.295 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:36.554 ++ readlink -f spdk_repo 00:02:36.554 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:36.554 + [[ -n /home/vagrant/spdk_repo ]] 00:02:36.554 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:36.554 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:36.554 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:36.554 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:36.554 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:36.554 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:36.554 + cd /home/vagrant/spdk_repo 00:02:36.554 + source /etc/os-release 00:02:36.554 ++ NAME='Fedora Linux' 00:02:36.554 ++ VERSION='38 (Cloud Edition)' 00:02:36.554 ++ ID=fedora 00:02:36.554 ++ VERSION_ID=38 00:02:36.554 ++ VERSION_CODENAME= 00:02:36.554 ++ PLATFORM_ID=platform:f38 00:02:36.554 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:36.554 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:36.554 ++ LOGO=fedora-logo-icon 00:02:36.554 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:36.554 ++ HOME_URL=https://fedoraproject.org/ 00:02:36.554 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:36.554 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:36.554 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:36.554 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:36.554 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:36.554 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:36.554 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:36.554 ++ SUPPORT_END=2024-05-14 00:02:36.554 ++ VARIANT='Cloud Edition' 00:02:36.554 ++ VARIANT_ID=cloud 00:02:36.554 + uname -a 00:02:36.554 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:36.554 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:36.813 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:37.380 Hugepages 00:02:37.380 node hugesize free / total 00:02:37.380 node0 1048576kB 0 / 0 00:02:37.380 node0 2048kB 0 / 0 00:02:37.380 00:02:37.380 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:37.380 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:37.380 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:37.380 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:37.380 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:37.380 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:37.380 + rm -f /tmp/spdk-ld-path 00:02:37.380 + source autorun-spdk.conf 00:02:37.380 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:37.380 ++ SPDK_TEST_NVME=1 00:02:37.380 ++ SPDK_TEST_FTL=1 00:02:37.380 ++ SPDK_TEST_ISAL=1 00:02:37.380 ++ SPDK_RUN_ASAN=1 00:02:37.380 ++ SPDK_RUN_UBSAN=1 00:02:37.380 ++ SPDK_TEST_XNVME=1 00:02:37.380 ++ SPDK_TEST_NVME_FDP=1 00:02:37.380 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:37.380 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:37.380 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:37.380 ++ RUN_NIGHTLY=1 00:02:37.380 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:37.380 + [[ -n '' ]] 00:02:37.380 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:37.380 + for M in /var/spdk/build-*-manifest.txt 00:02:37.380 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:37.380 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:37.380 + for M in /var/spdk/build-*-manifest.txt 00:02:37.380 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:37.380 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:37.380 ++ uname 00:02:37.380 + [[ Linux == \L\i\n\u\x ]] 00:02:37.380 + sudo dmesg -T 00:02:37.638 + sudo dmesg --clear 00:02:37.638 + dmesg_pid=5891 00:02:37.638 + sudo dmesg -Tw 00:02:37.638 + [[ Fedora Linux == FreeBSD ]] 
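# Editor's note: the check a few lines up, [[ Linux == \L\i\n\u\x ]], is how
# bash xtrace renders a quoted right-hand side: each character is escaped,
# which forces a literal string comparison instead of glob matching. A minimal
# sketch of the same idiom (illustrative only, not part of the original script):
#   kernel=$(uname)
#   if [[ $kernel == "Linux" ]]; then   # quoted RHS compares literally
#       echo "running on Linux"
#   fi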
00:02:37.638 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:37.638 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:37.638 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:37.638 + [[ -x /usr/src/fio-static/fio ]] 00:02:37.638 + export FIO_BIN=/usr/src/fio-static/fio 00:02:37.638 + FIO_BIN=/usr/src/fio-static/fio 00:02:37.638 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:37.638 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:37.638 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:37.638 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:37.638 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:37.638 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:37.638 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:37.638 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:37.638 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:37.638 Test configuration: 00:02:37.638 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:37.638 SPDK_TEST_NVME=1 00:02:37.638 SPDK_TEST_FTL=1 00:02:37.638 SPDK_TEST_ISAL=1 00:02:37.638 SPDK_RUN_ASAN=1 00:02:37.638 SPDK_RUN_UBSAN=1 00:02:37.638 SPDK_TEST_XNVME=1 00:02:37.638 SPDK_TEST_NVME_FDP=1 00:02:37.638 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:37.638 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:37.638 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:37.638 RUN_NIGHTLY=1 15:38:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:37.638 15:38:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:37.638 15:38:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:37.638 15:38:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:37.638 15:38:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.638 15:38:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.639 15:38:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.639 15:38:12 -- paths/export.sh@5 -- $ export PATH 00:02:37.639 15:38:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.639 
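# Editor's note: the three paths/export.sh lines above each prepend one more
# toolchain directory (golangci, then go, then protoc) onto a $PATH that
# already contained those entries, which is why the tail of the echoed PATH
# repeats. Sketch of the pattern (assumed equivalent, not the verbatim script):
#   export PATH=/opt/golangci/1.54.2/bin:$PATH
#   export PATH=/opt/go/1.21.1/bin:$PATH
#   export PATH=/opt/protoc/21.7/bin:$PATH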
15:38:12 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:37.639 15:38:12 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:37.639 15:38:12 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721489892.XXXXXX 00:02:37.639 15:38:12 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721489892.JrPZOk 00:02:37.639 15:38:12 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:37.639 15:38:12 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:02:37.639 15:38:12 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:37.639 15:38:12 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:37.639 15:38:12 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:37.639 15:38:12 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:37.639 15:38:12 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:37.639 15:38:12 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:02:37.639 15:38:12 -- common/autotest_common.sh@10 -- $ set +x 00:02:37.897 15:38:12 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:37.897 15:38:12 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:37.897 15:38:12 -- pm/common@17 -- $ local monitor 00:02:37.897 15:38:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.897 15:38:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.897 15:38:12 -- pm/common@25 -- $ sleep 1 00:02:37.897 15:38:12 -- pm/common@21 -- $ date +%s 00:02:37.897 15:38:12 -- pm/common@21 -- $ date +%s 00:02:37.897 15:38:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721489892 00:02:37.897 15:38:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721489892 00:02:37.897 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721489892_collect-vmstat.pm.log 00:02:37.897 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721489892_collect-cpu-load.pm.log 00:02:38.840 15:38:13 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:38.840 15:38:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:38.840 15:38:13 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:38.840 15:38:13 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:38.840 15:38:13 -- spdk/autobuild.sh@16 -- $ date -u 00:02:38.840 Sat Jul 20 03:38:13 PM UTC 2024 00:02:38.840 15:38:13 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:38.840 v24.05-13-g5fa2f5086 00:02:38.840 15:38:13 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:38.840 15:38:13 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:38.840 15:38:13 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:38.840 15:38:13 -- 
common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:38.840 15:38:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:38.840 ************************************ 00:02:38.840 START TEST asan 00:02:38.840 ************************************ 00:02:38.840 using asan 00:02:38.840 15:38:13 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:02:38.840 00:02:38.840 real 0m0.000s 00:02:38.840 user 0m0.000s 00:02:38.840 sys 0m0.000s 00:02:38.840 15:38:13 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:38.840 ************************************ 00:02:38.840 END TEST asan 00:02:38.840 15:38:13 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:38.840 ************************************ 00:02:38.840 15:38:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:38.840 15:38:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:38.840 15:38:13 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:38.840 15:38:13 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:38.840 15:38:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:38.840 ************************************ 00:02:38.840 START TEST ubsan 00:02:38.840 ************************************ 00:02:38.840 using ubsan 00:02:38.840 15:38:13 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:38.840 00:02:38.840 real 0m0.000s 00:02:38.840 user 0m0.000s 00:02:38.840 sys 0m0.000s 00:02:38.840 15:38:13 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:38.840 ************************************ 00:02:38.840 END TEST ubsan 00:02:38.840 ************************************ 00:02:38.840 15:38:13 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:38.840 15:38:13 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:38.840 15:38:13 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:38.840 15:38:13 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:38.840 15:38:13 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:38.840 15:38:13 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:38.840 15:38:13 -- common/autotest_common.sh@10 -- $ set +x 00:02:38.840 ************************************ 00:02:38.840 START TEST build_native_dpdk 00:02:38.840 ************************************ 00:02:38.840 15:38:13 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@68 -- $ 
gcc -dumpversion 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:38.840 15:38:13 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:39.099 caf0f5d395 version: 22.11.4 00:02:39.099 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:39.099 dc9c799c7d vhost: fix missing spinlock unlock 00:02:39.099 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:39.099 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 
00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:39.099 15:38:13 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:39.099 patching file config/rte_config.h 00:02:39.099 Hunk #1 succeeded at 60 (offset 1 line). 
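# Editor's note: the cmp_versions trace above is a field-wise version
# comparison: IFS=.-: splits "22.11.4" and "21.11.0" into arrays, the first
# differing component (22 > 21) decides, and the function returns 1 ("not
# less-than" 21.11.0), after which the build proceeds to patch rte_config.h.
# A self-contained sketch of the same algorithm (helper name is illustrative,
# not the exact SPDK function):
#   ver_lt() {                       # returns 0 iff $1 < $2
#       local IFS=.-:
#       local -a a=($1) b=($2)
#       local i
#       for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
#           (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
#           (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
#       done
#       return 1                     # equal versions are not less-than
#   }
#   ver_lt 22.11.4 21.11.0 && echo "older" || echo "22.11.4 >= 21.11.0"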
00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:39.099 15:38:13 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:44.364 The Meson build system 00:02:44.364 Version: 1.3.1 00:02:44.364 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:44.364 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:44.364 Build type: native build 00:02:44.364 Program cat found: YES (/usr/bin/cat) 00:02:44.364 Project name: DPDK 00:02:44.364 Project version: 22.11.4 00:02:44.364 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:44.364 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:44.364 Host machine cpu family: x86_64 00:02:44.364 Host machine cpu: x86_64 00:02:44.364 Message: ## Building in Developer Mode ## 00:02:44.364 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:44.364 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:44.364 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:44.364 Program objdump found: YES (/usr/bin/objdump) 00:02:44.364 Program python3 found: YES (/usr/bin/python3) 00:02:44.364 Program cat found: YES (/usr/bin/cat) 00:02:44.364 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
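# Editor's note: the deprecation warning just above is triggered by the
# -Dmachine=native flag passed to meson a few lines earlier. On DPDK releases
# where "machine" is deprecated, the equivalent invocation would presumably
# use the replacement option the warning names:
#   meson build-tmp -Dcpu_instruction_set=native ...   # other flags unchanged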
00:02:44.364 Checking for size of "void *" : 8 00:02:44.364 Checking for size of "void *" : 8 (cached) 00:02:44.364 Library m found: YES 00:02:44.364 Library numa found: YES 00:02:44.364 Has header "numaif.h" : YES 00:02:44.364 Library fdt found: NO 00:02:44.364 Library execinfo found: NO 00:02:44.364 Has header "execinfo.h" : YES 00:02:44.364 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:44.364 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:44.364 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:44.364 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:44.364 Run-time dependency openssl found: YES 3.0.9 00:02:44.364 Run-time dependency libpcap found: YES 1.10.4 00:02:44.364 Has header "pcap.h" with dependency libpcap: YES 00:02:44.364 Compiler for C supports arguments -Wcast-qual: YES 00:02:44.364 Compiler for C supports arguments -Wdeprecated: YES 00:02:44.364 Compiler for C supports arguments -Wformat: YES 00:02:44.364 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:44.364 Compiler for C supports arguments -Wformat-security: NO 00:02:44.364 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:44.364 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:44.364 Compiler for C supports arguments -Wnested-externs: YES 00:02:44.364 Compiler for C supports arguments -Wold-style-definition: YES 00:02:44.364 Compiler for C supports arguments -Wpointer-arith: YES 00:02:44.364 Compiler for C supports arguments -Wsign-compare: YES 00:02:44.364 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:44.364 Compiler for C supports arguments -Wundef: YES 00:02:44.364 Compiler for C supports arguments -Wwrite-strings: YES 00:02:44.364 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:44.364 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:44.364 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:44.364 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:44.364 Compiler for C supports arguments -mavx512f: YES 00:02:44.364 Checking if "AVX512 checking" compiles: YES 00:02:44.364 Fetching value of define "__SSE4_2__" : 1 00:02:44.364 Fetching value of define "__AES__" : 1 00:02:44.364 Fetching value of define "__AVX__" : 1 00:02:44.365 Fetching value of define "__AVX2__" : 1 00:02:44.365 Fetching value of define "__AVX512BW__" : 1 00:02:44.365 Fetching value of define "__AVX512CD__" : 1 00:02:44.365 Fetching value of define "__AVX512DQ__" : 1 00:02:44.365 Fetching value of define "__AVX512F__" : 1 00:02:44.365 Fetching value of define "__AVX512VL__" : 1 00:02:44.365 Fetching value of define "__PCLMUL__" : 1 00:02:44.365 Fetching value of define "__RDRND__" : 1 00:02:44.365 Fetching value of define "__RDSEED__" : 1 00:02:44.365 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:44.365 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:44.365 Message: lib/kvargs: Defining dependency "kvargs" 00:02:44.365 Message: lib/telemetry: Defining dependency "telemetry" 00:02:44.365 Checking for function "getentropy" : YES 00:02:44.365 Message: lib/eal: Defining dependency "eal" 00:02:44.365 Message: lib/ring: Defining dependency "ring" 00:02:44.365 Message: lib/rcu: Defining dependency "rcu" 00:02:44.365 Message: lib/mempool: Defining dependency "mempool" 00:02:44.365 Message: lib/mbuf: Defining dependency "mbuf" 00:02:44.365 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:44.365 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:44.365 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:44.365 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:44.365 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:44.365 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:44.365 Compiler for C supports arguments -mpclmul: YES 00:02:44.365 Compiler for C supports arguments -maes: YES 00:02:44.365 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:44.365 Compiler for C supports arguments -mavx512bw: YES 00:02:44.365 Compiler for C supports arguments -mavx512dq: YES 00:02:44.365 Compiler for C supports arguments -mavx512vl: YES 00:02:44.365 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:44.365 Compiler for C supports arguments -mavx2: YES 00:02:44.365 Compiler for C supports arguments -mavx: YES 00:02:44.365 Message: lib/net: Defining dependency "net" 00:02:44.365 Message: lib/meter: Defining dependency "meter" 00:02:44.365 Message: lib/ethdev: Defining dependency "ethdev" 00:02:44.365 Message: lib/pci: Defining dependency "pci" 00:02:44.365 Message: lib/cmdline: Defining dependency "cmdline" 00:02:44.365 Message: lib/metrics: Defining dependency "metrics" 00:02:44.365 Message: lib/hash: Defining dependency "hash" 00:02:44.365 Message: lib/timer: Defining dependency "timer" 00:02:44.365 Fetching value of define "__AVX2__" : 1 (cached) 00:02:44.365 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:44.365 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:44.365 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:44.365 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:44.365 Message: lib/acl: Defining dependency "acl" 00:02:44.365 Message: lib/bbdev: Defining dependency "bbdev" 00:02:44.365 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:44.365 Run-time dependency libelf found: YES 0.190 00:02:44.365 Message: lib/bpf: Defining dependency "bpf" 00:02:44.365 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:44.365 Message: lib/compressdev: Defining dependency "compressdev" 00:02:44.365 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:44.365 Message: lib/distributor: Defining dependency "distributor" 00:02:44.365 Message: lib/efd: Defining dependency "efd" 00:02:44.365 Message: lib/eventdev: Defining dependency "eventdev" 00:02:44.365 Message: lib/gpudev: Defining dependency "gpudev" 00:02:44.365 Message: lib/gro: Defining dependency "gro" 00:02:44.365 Message: lib/gso: Defining dependency "gso" 00:02:44.365 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:44.365 Message: lib/jobstats: Defining dependency "jobstats" 00:02:44.365 Message: lib/latencystats: Defining dependency "latencystats" 00:02:44.365 Message: lib/lpm: Defining dependency "lpm" 00:02:44.365 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:44.365 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:44.365 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:44.365 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:44.365 Message: lib/member: Defining dependency "member" 00:02:44.365 Message: lib/pcapng: Defining dependency "pcapng" 00:02:44.365 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:44.365 Message: lib/power: Defining dependency "power" 00:02:44.365 Message: lib/rawdev: Defining dependency "rawdev" 00:02:44.365 Message: lib/regexdev: Defining dependency "regexdev" 00:02:44.365 Message: lib/dmadev: 
Defining dependency "dmadev" 00:02:44.365 Message: lib/rib: Defining dependency "rib" 00:02:44.365 Message: lib/reorder: Defining dependency "reorder" 00:02:44.365 Message: lib/sched: Defining dependency "sched" 00:02:44.365 Message: lib/security: Defining dependency "security" 00:02:44.365 Message: lib/stack: Defining dependency "stack" 00:02:44.365 Has header "linux/userfaultfd.h" : YES 00:02:44.365 Message: lib/vhost: Defining dependency "vhost" 00:02:44.365 Message: lib/ipsec: Defining dependency "ipsec" 00:02:44.365 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:44.365 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:44.365 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:44.365 Message: lib/fib: Defining dependency "fib" 00:02:44.365 Message: lib/port: Defining dependency "port" 00:02:44.365 Message: lib/pdump: Defining dependency "pdump" 00:02:44.365 Message: lib/table: Defining dependency "table" 00:02:44.365 Message: lib/pipeline: Defining dependency "pipeline" 00:02:44.365 Message: lib/graph: Defining dependency "graph" 00:02:44.365 Message: lib/node: Defining dependency "node" 00:02:44.365 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:44.365 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:44.365 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:44.365 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:44.365 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:44.365 Compiler for C supports arguments -Wno-unused-value: YES 00:02:44.365 Compiler for C supports arguments -Wno-format: YES 00:02:44.365 Compiler for C supports arguments -Wno-format-security: YES 00:02:44.365 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:44.365 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:45.367 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:45.367 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:45.367 Fetching value of define "__AVX2__" : 1 (cached) 00:02:45.367 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:45.367 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:45.367 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:45.367 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:45.367 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:45.367 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:45.367 Program doxygen found: YES (/usr/bin/doxygen) 00:02:45.367 Configuring doxy-api.conf using configuration 00:02:45.367 Program sphinx-build found: NO 00:02:45.367 Configuring rte_build_config.h using configuration 00:02:45.367 Message: 00:02:45.367 ================= 00:02:45.367 Applications Enabled 00:02:45.367 ================= 00:02:45.367 00:02:45.367 apps: 00:02:45.367 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:45.367 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:45.367 test-security-perf, 00:02:45.367 00:02:45.367 Message: 00:02:45.367 ================= 00:02:45.367 Libraries Enabled 00:02:45.367 ================= 00:02:45.367 00:02:45.367 libs: 00:02:45.367 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:45.367 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:45.367 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:45.367 eventdev, gpudev, 
gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:45.367 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:45.367 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:45.367 table, pipeline, graph, node, 00:02:45.367 00:02:45.367 Message: 00:02:45.367 =============== 00:02:45.367 Drivers Enabled 00:02:45.367 =============== 00:02:45.367 00:02:45.367 common: 00:02:45.367 00:02:45.367 bus: 00:02:45.367 pci, vdev, 00:02:45.367 mempool: 00:02:45.367 ring, 00:02:45.367 dma: 00:02:45.367 00:02:45.367 net: 00:02:45.367 i40e, 00:02:45.367 raw: 00:02:45.367 00:02:45.367 crypto: 00:02:45.367 00:02:45.367 compress: 00:02:45.367 00:02:45.367 regex: 00:02:45.367 00:02:45.367 vdpa: 00:02:45.367 00:02:45.367 event: 00:02:45.367 00:02:45.367 baseband: 00:02:45.367 00:02:45.367 gpu: 00:02:45.367 00:02:45.367 00:02:45.367 Message: 00:02:45.367 ================= 00:02:45.367 Content Skipped 00:02:45.367 ================= 00:02:45.367 00:02:45.367 apps: 00:02:45.367 00:02:45.367 libs: 00:02:45.367 kni: explicitly disabled via build config (deprecated lib) 00:02:45.367 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:45.367 00:02:45.367 drivers: 00:02:45.367 common/cpt: not in enabled drivers build config 00:02:45.367 common/dpaax: not in enabled drivers build config 00:02:45.367 common/iavf: not in enabled drivers build config 00:02:45.367 common/idpf: not in enabled drivers build config 00:02:45.367 common/mvep: not in enabled drivers build config 00:02:45.367 common/octeontx: not in enabled drivers build config 00:02:45.367 bus/auxiliary: not in enabled drivers build config 00:02:45.367 bus/dpaa: not in enabled drivers build config 00:02:45.367 bus/fslmc: not in enabled drivers build config 00:02:45.367 bus/ifpga: not in enabled drivers build config 00:02:45.367 bus/vmbus: not in enabled drivers build config 00:02:45.367 common/cnxk: not in enabled drivers build config 00:02:45.367 common/mlx5: not in enabled drivers build config 00:02:45.367 common/qat: not in enabled drivers build config 00:02:45.367 common/sfc_efx: not in enabled drivers build config 00:02:45.367 mempool/bucket: not in enabled drivers build config 00:02:45.367 mempool/cnxk: not in enabled drivers build config 00:02:45.367 mempool/dpaa: not in enabled drivers build config 00:02:45.367 mempool/dpaa2: not in enabled drivers build config 00:02:45.367 mempool/octeontx: not in enabled drivers build config 00:02:45.367 mempool/stack: not in enabled drivers build config 00:02:45.367 dma/cnxk: not in enabled drivers build config 00:02:45.367 dma/dpaa: not in enabled drivers build config 00:02:45.367 dma/dpaa2: not in enabled drivers build config 00:02:45.367 dma/hisilicon: not in enabled drivers build config 00:02:45.367 dma/idxd: not in enabled drivers build config 00:02:45.367 dma/ioat: not in enabled drivers build config 00:02:45.367 dma/skeleton: not in enabled drivers build config 00:02:45.367 net/af_packet: not in enabled drivers build config 00:02:45.367 net/af_xdp: not in enabled drivers build config 00:02:45.367 net/ark: not in enabled drivers build config 00:02:45.367 net/atlantic: not in enabled drivers build config 00:02:45.367 net/avp: not in enabled drivers build config 00:02:45.367 net/axgbe: not in enabled drivers build config 00:02:45.367 net/bnx2x: not in enabled drivers build config 00:02:45.367 net/bnxt: not in enabled drivers build config 00:02:45.367 net/bonding: not in enabled drivers build config 00:02:45.367 net/cnxk: not in enabled drivers build config 
00:02:45.367 net/cxgbe: not in enabled drivers build config 00:02:45.367 net/dpaa: not in enabled drivers build config 00:02:45.367 net/dpaa2: not in enabled drivers build config 00:02:45.367 net/e1000: not in enabled drivers build config 00:02:45.367 net/ena: not in enabled drivers build config 00:02:45.367 net/enetc: not in enabled drivers build config 00:02:45.367 net/enetfec: not in enabled drivers build config 00:02:45.367 net/enic: not in enabled drivers build config 00:02:45.367 net/failsafe: not in enabled drivers build config 00:02:45.367 net/fm10k: not in enabled drivers build config 00:02:45.367 net/gve: not in enabled drivers build config 00:02:45.367 net/hinic: not in enabled drivers build config 00:02:45.367 net/hns3: not in enabled drivers build config 00:02:45.367 net/iavf: not in enabled drivers build config 00:02:45.367 net/ice: not in enabled drivers build config 00:02:45.367 net/idpf: not in enabled drivers build config 00:02:45.367 net/igc: not in enabled drivers build config 00:02:45.367 net/ionic: not in enabled drivers build config 00:02:45.367 net/ipn3ke: not in enabled drivers build config 00:02:45.367 net/ixgbe: not in enabled drivers build config 00:02:45.367 net/kni: not in enabled drivers build config 00:02:45.367 net/liquidio: not in enabled drivers build config 00:02:45.367 net/mana: not in enabled drivers build config 00:02:45.367 net/memif: not in enabled drivers build config 00:02:45.367 net/mlx4: not in enabled drivers build config 00:02:45.367 net/mlx5: not in enabled drivers build config 00:02:45.367 net/mvneta: not in enabled drivers build config 00:02:45.367 net/mvpp2: not in enabled drivers build config 00:02:45.367 net/netvsc: not in enabled drivers build config 00:02:45.367 net/nfb: not in enabled drivers build config 00:02:45.367 net/nfp: not in enabled drivers build config 00:02:45.367 net/ngbe: not in enabled drivers build config 00:02:45.367 net/null: not in enabled drivers build config 00:02:45.367 net/octeontx: not in enabled drivers build config 00:02:45.367 net/octeon_ep: not in enabled drivers build config 00:02:45.367 net/pcap: not in enabled drivers build config 00:02:45.367 net/pfe: not in enabled drivers build config 00:02:45.367 net/qede: not in enabled drivers build config 00:02:45.367 net/ring: not in enabled drivers build config 00:02:45.367 net/sfc: not in enabled drivers build config 00:02:45.367 net/softnic: not in enabled drivers build config 00:02:45.367 net/tap: not in enabled drivers build config 00:02:45.367 net/thunderx: not in enabled drivers build config 00:02:45.367 net/txgbe: not in enabled drivers build config 00:02:45.367 net/vdev_netvsc: not in enabled drivers build config 00:02:45.367 net/vhost: not in enabled drivers build config 00:02:45.367 net/virtio: not in enabled drivers build config 00:02:45.367 net/vmxnet3: not in enabled drivers build config 00:02:45.367 raw/cnxk_bphy: not in enabled drivers build config 00:02:45.367 raw/cnxk_gpio: not in enabled drivers build config 00:02:45.367 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:45.367 raw/ifpga: not in enabled drivers build config 00:02:45.367 raw/ntb: not in enabled drivers build config 00:02:45.367 raw/skeleton: not in enabled drivers build config 00:02:45.367 crypto/armv8: not in enabled drivers build config 00:02:45.367 crypto/bcmfs: not in enabled drivers build config 00:02:45.367 crypto/caam_jr: not in enabled drivers build config 00:02:45.367 crypto/ccp: not in enabled drivers build config 00:02:45.367 crypto/cnxk: not in enabled drivers 
build config 00:02:45.367 crypto/dpaa_sec: not in enabled drivers build config 00:02:45.367 crypto/dpaa2_sec: not in enabled drivers build config 00:02:45.367 crypto/ipsec_mb: not in enabled drivers build config 00:02:45.367 crypto/mlx5: not in enabled drivers build config 00:02:45.367 crypto/mvsam: not in enabled drivers build config 00:02:45.367 crypto/nitrox: not in enabled drivers build config 00:02:45.367 crypto/null: not in enabled drivers build config 00:02:45.367 crypto/octeontx: not in enabled drivers build config 00:02:45.367 crypto/openssl: not in enabled drivers build config 00:02:45.367 crypto/scheduler: not in enabled drivers build config 00:02:45.367 crypto/uadk: not in enabled drivers build config 00:02:45.367 crypto/virtio: not in enabled drivers build config 00:02:45.367 compress/isal: not in enabled drivers build config 00:02:45.367 compress/mlx5: not in enabled drivers build config 00:02:45.367 compress/octeontx: not in enabled drivers build config 00:02:45.367 compress/zlib: not in enabled drivers build config 00:02:45.367 regex/mlx5: not in enabled drivers build config 00:02:45.367 regex/cn9k: not in enabled drivers build config 00:02:45.367 vdpa/ifc: not in enabled drivers build config 00:02:45.367 vdpa/mlx5: not in enabled drivers build config 00:02:45.367 vdpa/sfc: not in enabled drivers build config 00:02:45.367 event/cnxk: not in enabled drivers build config 00:02:45.367 event/dlb2: not in enabled drivers build config 00:02:45.367 event/dpaa: not in enabled drivers build config 00:02:45.367 event/dpaa2: not in enabled drivers build config 00:02:45.367 event/dsw: not in enabled drivers build config 00:02:45.367 event/opdl: not in enabled drivers build config 00:02:45.367 event/skeleton: not in enabled drivers build config 00:02:45.367 event/sw: not in enabled drivers build config 00:02:45.367 event/octeontx: not in enabled drivers build config 00:02:45.367 baseband/acc: not in enabled drivers build config 00:02:45.367 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:45.367 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:45.367 baseband/la12xx: not in enabled drivers build config 00:02:45.367 baseband/null: not in enabled drivers build config 00:02:45.368 baseband/turbo_sw: not in enabled drivers build config 00:02:45.368 gpu/cuda: not in enabled drivers build config 00:02:45.368 00:02:45.368 00:02:45.368 Build targets in project: 311 00:02:45.368 00:02:45.368 DPDK 22.11.4 00:02:45.368 00:02:45.368 User defined options 00:02:45.368 libdir : lib 00:02:45.368 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:45.368 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:45.368 c_link_args : 00:02:45.368 enable_docs : false 00:02:45.368 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:45.368 enable_kmods : false 00:02:45.368 machine : native 00:02:45.368 tests : false 00:02:45.368 00:02:45.368 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:45.368 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
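For reference, the configure step that produced the summary above corresponds to a single meson invocation. The sketch below is reconstructed from the "User defined options" block (the paths come from this job's workspace, and the wrapper script's exact command line is not shown in the log), written with the `meson setup` spelling that the deprecation WARNING asks for:

    # Sketch only: every option below is taken from the "User defined options"
    # summary above; the source tree is /home/vagrant/spdk_repo/dpdk per the
    # ninja banner that follows.
    meson setup /home/vagrant/spdk_repo/dpdk/build-tmp /home/vagrant/spdk_repo/dpdk \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base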
00:02:45.626 15:38:20 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:45.626 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:45.626 [1/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:45.626 [2/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:45.626 [3/740] Generating lib/rte_kvargs_def with a custom command 00:02:45.626 [4/740] Generating lib/rte_telemetry_def with a custom command 00:02:45.626 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:45.626 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:45.626 [7/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:45.626 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:45.626 [9/740] Linking static target lib/librte_kvargs.a 00:02:45.626 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:45.626 [11/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:45.626 [12/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:45.626 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:45.884 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:45.884 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:45.884 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:45.884 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:45.884 [18/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.884 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:45.884 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:45.884 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:45.884 [22/740] Linking target lib/librte_kvargs.so.23.0 00:02:45.884 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:45.884 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:45.884 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:45.884 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:46.142 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:46.142 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:46.142 [29/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:46.142 [30/740] Linking static target lib/librte_telemetry.a 00:02:46.142 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:46.142 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:46.142 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:46.142 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:46.142 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:46.142 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:46.142 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:46.142 [38/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:46.142 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:46.142 [40/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:46.142 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:46.399 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:46.399 [43/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.399 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:46.399 [45/740] Linking target lib/librte_telemetry.so.23.0 00:02:46.399 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:46.399 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:46.399 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:46.399 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:46.399 [50/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:46.399 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:46.399 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:46.399 [53/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:46.399 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:46.656 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:46.656 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:46.656 [57/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:46.656 [58/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:46.656 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:46.656 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:46.656 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:46.657 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:46.657 [63/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:46.657 [64/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:46.657 [65/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:46.657 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:46.657 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:46.657 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:46.657 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:46.657 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:46.657 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:46.914 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:46.914 [73/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:46.914 [74/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:46.914 [75/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:46.914 [76/740] Generating lib/rte_eal_def with a custom command 00:02:46.914 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:46.914 [78/740] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:46.914 [79/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:46.914 [80/740] Generating lib/rte_eal_mingw with a custom command 00:02:46.914 [81/740] Generating lib/rte_ring_def with a custom command 00:02:46.914 [82/740] Generating lib/rte_ring_mingw with a custom command 00:02:46.914 [83/740] Generating lib/rte_rcu_def with a custom command 00:02:46.914 [84/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:46.914 [85/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:46.914 [86/740] Generating lib/rte_rcu_mingw with a custom command 00:02:46.914 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:46.914 [88/740] Linking static target lib/librte_ring.a 00:02:46.914 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:46.914 [90/740] Generating lib/rte_mempool_def with a custom command 00:02:47.171 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:02:47.171 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:47.171 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:47.171 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.171 [95/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:47.171 [96/740] Generating lib/rte_mbuf_def with a custom command 00:02:47.171 [97/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:47.171 [98/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:47.171 [99/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:47.171 [100/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:47.428 [101/740] Linking static target lib/librte_eal.a 00:02:47.428 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:47.428 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:47.428 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:47.428 [105/740] Linking static target lib/librte_rcu.a 00:02:47.686 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:47.686 [107/740] Linking static target lib/librte_mempool.a 00:02:47.686 [108/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:47.686 [109/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:47.686 [110/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:47.686 [111/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:47.686 [112/740] Generating lib/rte_net_def with a custom command 00:02:47.686 [113/740] Generating lib/rte_net_mingw with a custom command 00:02:47.686 [114/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:47.686 [115/740] Generating lib/rte_meter_def with a custom command 00:02:47.686 [116/740] Generating lib/rte_meter_mingw with a custom command 00:02:47.686 [117/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.686 [118/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:47.945 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:47.945 [120/740] Linking static target lib/librte_meter.a 00:02:47.945 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:47.945 [122/740] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:47.945 [123/740] Linking static target lib/librte_net.a 00:02:48.203 [124/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:48.203 [125/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.203 [126/740] Linking static target lib/librte_mbuf.a 00:02:48.203 [127/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:48.203 [128/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:48.203 [129/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:48.203 [130/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.203 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:48.203 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:48.460 [133/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.460 [134/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.460 [135/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:48.718 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:48.718 [137/740] Generating lib/rte_ethdev_def with a custom command 00:02:48.718 [138/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:48.718 [139/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:48.718 [140/740] Generating lib/rte_pci_def with a custom command 00:02:48.718 [141/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:48.718 [142/740] Generating lib/rte_pci_mingw with a custom command 00:02:48.718 [143/740] Linking static target lib/librte_pci.a 00:02:48.718 [144/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:48.718 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:48.718 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:48.975 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:48.975 [148/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:48.975 [149/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.975 [150/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:48.975 [151/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:48.975 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:48.975 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:48.975 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:48.975 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:48.975 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:48.975 [157/740] Generating lib/rte_cmdline_def with a custom command 00:02:49.232 [158/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:49.232 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:49.232 [160/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:49.232 [161/740] Generating lib/rte_metrics_def with a custom command 00:02:49.232 [162/740] Generating lib/rte_metrics_mingw with a custom command 00:02:49.232 [163/740] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:49.232 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:49.232 [165/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:49.232 [166/740] Generating lib/rte_hash_def with a custom command 00:02:49.232 [167/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:49.232 [168/740] Generating lib/rte_hash_mingw with a custom command 00:02:49.232 [169/740] Linking static target lib/librte_cmdline.a 00:02:49.232 [170/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:49.232 [171/740] Generating lib/rte_timer_def with a custom command 00:02:49.232 [172/740] Generating lib/rte_timer_mingw with a custom command 00:02:49.232 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:49.489 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:49.489 [175/740] Linking static target lib/librte_metrics.a 00:02:49.489 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:49.489 [177/740] Linking static target lib/librte_timer.a 00:02:49.747 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.747 [179/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:49.747 [180/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:50.005 [181/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:50.005 [182/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.005 [183/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.005 [184/740] Generating lib/rte_acl_def with a custom command 00:02:50.005 [185/740] Generating lib/rte_acl_mingw with a custom command 00:02:50.005 [186/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:50.262 [187/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:50.262 [188/740] Generating lib/rte_bbdev_def with a custom command 00:02:50.262 [189/740] Linking static target lib/librte_ethdev.a 00:02:50.262 [190/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:50.262 [191/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:50.262 [192/740] Generating lib/rte_bitratestats_def with a custom command 00:02:50.262 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:50.519 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:50.519 [195/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:50.519 [196/740] Linking static target lib/librte_bitratestats.a 00:02:50.519 [197/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:50.777 [198/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.777 [199/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:50.777 [200/740] Linking static target lib/librte_bbdev.a 00:02:51.033 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:51.033 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:51.033 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:51.290 [204/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.290 [205/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 
00:02:51.290 [206/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:51.290 [207/740] Linking static target lib/librte_hash.a 00:02:51.547 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:51.547 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:51.547 [210/740] Generating lib/rte_bpf_def with a custom command 00:02:51.547 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:02:51.804 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:51.804 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:02:51.804 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:51.804 [215/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:51.804 [216/740] Linking static target lib/librte_cfgfile.a 00:02:51.804 [217/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:51.804 [218/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.060 [219/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:52.061 [220/740] Generating lib/rte_compressdev_def with a custom command 00:02:52.061 [221/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:52.061 [222/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:52.318 [223/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.318 [224/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:52.318 [225/740] Linking static target lib/librte_bpf.a 00:02:52.318 [226/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:52.318 [227/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:52.318 [228/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:52.318 [229/740] Generating lib/rte_cryptodev_def with a custom command 00:02:52.318 [230/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:52.318 [231/740] Linking static target lib/librte_acl.a 00:02:52.318 [232/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.318 [233/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:52.318 [234/740] Linking static target lib/librte_compressdev.a 00:02:52.576 [235/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.576 [236/740] Generating lib/rte_distributor_def with a custom command 00:02:52.576 [237/740] Generating lib/rte_distributor_mingw with a custom command 00:02:52.576 [238/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.576 [239/740] Generating lib/rte_efd_def with a custom command 00:02:52.576 [240/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:52.576 [241/740] Generating lib/rte_efd_mingw with a custom command 00:02:52.833 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:52.833 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:52.833 [244/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:53.090 [245/740] Linking static target lib/librte_distributor.a 00:02:53.090 [246/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:53.090 [247/740] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:53.090 [248/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.090 [249/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.346 [250/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:53.346 [251/740] Generating lib/rte_eventdev_def with a custom command 00:02:53.603 [252/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:53.603 [253/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:53.603 [254/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:53.603 [255/740] Linking static target lib/librte_efd.a 00:02:53.603 [256/740] Generating lib/rte_gpudev_def with a custom command 00:02:53.861 [257/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:53.861 [258/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.861 [259/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:54.117 [260/740] Linking static target lib/librte_cryptodev.a 00:02:54.117 [261/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:54.117 [262/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:54.117 [263/740] Linking static target lib/librte_gpudev.a 00:02:54.117 [264/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:54.117 [265/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:54.373 [266/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:54.373 [267/740] Generating lib/rte_gro_def with a custom command 00:02:54.373 [268/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.373 [269/740] Generating lib/rte_gro_mingw with a custom command 00:02:54.373 [270/740] Linking target lib/librte_eal.so.23.0 00:02:54.373 [271/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:54.373 [272/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:54.630 [273/740] Linking target lib/librte_ring.so.23.0 00:02:54.630 [274/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:54.630 [275/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.630 [276/740] Linking target lib/librte_meter.so.23.0 00:02:54.630 [277/740] Linking target lib/librte_pci.so.23.0 00:02:54.630 [278/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:54.630 [279/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:54.630 [280/740] Linking target lib/librte_timer.so.23.0 00:02:54.630 [281/740] Linking target lib/librte_rcu.so.23.0 00:02:54.630 [282/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:54.630 [283/740] Linking target lib/librte_mempool.so.23.0 00:02:54.630 [284/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:54.630 [285/740] Linking target lib/librte_acl.so.23.0 00:02:54.887 [286/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:54.887 [287/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:54.887 [288/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:54.887 [289/740] Compiling C 
object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:54.887 [290/740] Linking static target lib/librte_gro.a 00:02:54.887 [291/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:54.887 [292/740] Linking target lib/librte_cfgfile.so.23.0 00:02:54.887 [293/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:54.887 [294/740] Linking static target lib/librte_eventdev.a 00:02:54.887 [295/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.887 [296/740] Linking target lib/librte_mbuf.so.23.0 00:02:54.887 [297/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:54.887 [298/740] Generating lib/rte_gso_def with a custom command 00:02:54.887 [299/740] Generating lib/rte_gso_mingw with a custom command 00:02:54.887 [300/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:54.887 [301/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:54.887 [302/740] Linking target lib/librte_net.so.23.0 00:02:54.887 [303/740] Linking target lib/librte_bbdev.so.23.0 00:02:54.887 [304/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.144 [305/740] Linking target lib/librte_compressdev.so.23.0 00:02:55.144 [306/740] Linking target lib/librte_distributor.so.23.0 00:02:55.144 [307/740] Linking target lib/librte_gpudev.so.23.0 00:02:55.144 [308/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:55.144 [309/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:55.144 [310/740] Linking target lib/librte_ethdev.so.23.0 00:02:55.144 [311/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:55.144 [312/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:55.144 [313/740] Linking target lib/librte_cmdline.so.23.0 00:02:55.144 [314/740] Linking target lib/librte_hash.so.23.0 00:02:55.144 [315/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:55.144 [316/740] Linking static target lib/librte_gso.a 00:02:55.144 [317/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:55.402 [318/740] Linking target lib/librte_metrics.so.23.0 00:02:55.402 [319/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:55.402 [320/740] Linking target lib/librte_bpf.so.23.0 00:02:55.402 [321/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.402 [322/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:55.402 [323/740] Linking target lib/librte_efd.so.23.0 00:02:55.402 [324/740] Linking target lib/librte_bitratestats.so.23.0 00:02:55.402 [325/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:55.402 [326/740] Linking target lib/librte_gro.so.23.0 00:02:55.402 [327/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:55.402 [328/740] Generating lib/rte_ip_frag_def with a custom command 00:02:55.402 [329/740] Linking target lib/librte_gso.so.23.0 00:02:55.402 [330/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:55.402 [331/740] Generating lib/rte_jobstats_def with a custom command 00:02:55.659 [332/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:55.659 [333/740] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:55.659 [334/740] Generating lib/rte_latencystats_def with a custom command 00:02:55.659 [335/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:55.659 [336/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:55.659 [337/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:55.659 [338/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:55.659 [339/740] Linking static target lib/librte_jobstats.a 00:02:55.659 [340/740] Generating lib/rte_lpm_def with a custom command 00:02:55.659 [341/740] Generating lib/rte_lpm_mingw with a custom command 00:02:55.659 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:55.916 [343/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.916 [344/740] Linking target lib/librte_jobstats.so.23.0 00:02:55.916 [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:55.916 [346/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:55.916 [347/740] Linking static target lib/librte_ip_frag.a 00:02:55.916 [348/740] Linking static target lib/librte_latencystats.a 00:02:55.916 [349/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:56.172 [350/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.172 [351/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:56.172 [352/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:56.172 [353/740] Linking target lib/librte_cryptodev.so.23.0 00:02:56.172 [354/740] Generating lib/rte_member_def with a custom command 00:02:56.172 [355/740] Generating lib/rte_member_mingw with a custom command 00:02:56.172 [356/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:56.172 [357/740] Generating lib/rte_pcapng_def with a custom command 00:02:56.172 [358/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.172 [359/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:56.172 [360/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:56.172 [361/740] Linking target lib/librte_latencystats.so.23.0 00:02:56.172 [362/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.429 [363/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:56.429 [364/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:56.429 [365/740] Linking target lib/librte_ip_frag.so.23.0 00:02:56.429 [366/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:56.429 [367/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:56.429 [368/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:56.429 [369/740] Linking static target lib/librte_lpm.a 00:02:56.429 [370/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:56.429 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:56.687 [372/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:56.687 [373/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.687 [374/740] 
Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:56.687 [375/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:56.687 [376/740] Linking target lib/librte_eventdev.so.23.0 00:02:56.687 [377/740] Generating lib/rte_power_def with a custom command 00:02:56.687 [378/740] Generating lib/rte_power_mingw with a custom command 00:02:56.687 [379/740] Generating lib/rte_rawdev_def with a custom command 00:02:56.687 [380/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:56.687 [381/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:56.687 [382/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.687 [383/740] Linking static target lib/librte_pcapng.a 00:02:56.687 [384/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:56.687 [385/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:56.946 [386/740] Generating lib/rte_regexdev_def with a custom command 00:02:56.946 [387/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:56.946 [388/740] Linking target lib/librte_lpm.so.23.0 00:02:56.946 [389/740] Generating lib/rte_dmadev_def with a custom command 00:02:56.946 [390/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:56.946 [391/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:56.946 [392/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:56.946 [393/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:56.946 [394/740] Generating lib/rte_rib_def with a custom command 00:02:56.946 [395/740] Generating lib/rte_rib_mingw with a custom command 00:02:56.946 [396/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:56.946 [397/740] Linking static target lib/librte_rawdev.a 00:02:56.946 [398/740] Generating lib/rte_reorder_def with a custom command 00:02:56.946 [399/740] Generating lib/rte_reorder_mingw with a custom command 00:02:56.946 [400/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.946 [401/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:57.203 [402/740] Linking target lib/librte_pcapng.so.23.0 00:02:57.203 [403/740] Linking static target lib/librte_power.a 00:02:57.203 [404/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:57.203 [405/740] Linking static target lib/librte_dmadev.a 00:02:57.203 [406/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:57.203 [407/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:57.203 [408/740] Linking static target lib/librte_regexdev.a 00:02:57.203 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:57.509 [410/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:57.509 [411/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:57.509 [412/740] Generating lib/rte_sched_def with a custom command 00:02:57.509 [413/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.509 [414/740] Generating lib/rte_sched_mingw with a custom command 00:02:57.509 [415/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:57.509 [416/740] Linking target lib/librte_rawdev.so.23.0 00:02:57.509 [417/740] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:57.509 [418/740] Linking static target lib/librte_reorder.a 00:02:57.509 [419/740] Generating lib/rte_security_def with a custom command 00:02:57.509 [420/740] Generating lib/rte_security_mingw with a custom command 00:02:57.509 [421/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:57.509 [422/740] Linking static target lib/librte_member.a 00:02:57.509 [423/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:57.509 [424/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:57.509 [425/740] Generating lib/rte_stack_def with a custom command 00:02:57.766 [426/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.766 [427/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:57.766 [428/740] Generating lib/rte_stack_mingw with a custom command 00:02:57.766 [429/740] Linking static target lib/librte_rib.a 00:02:57.766 [430/740] Linking target lib/librte_dmadev.so.23.0 00:02:57.766 [431/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:57.766 [432/740] Linking static target lib/librte_stack.a 00:02:57.766 [433/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.766 [434/740] Linking target lib/librte_reorder.so.23.0 00:02:57.766 [435/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:57.766 [436/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:57.766 [437/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.766 [438/740] Linking target lib/librte_stack.so.23.0 00:02:58.023 [439/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.023 [440/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.023 [441/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.023 [442/740] Linking target lib/librte_member.so.23.0 00:02:58.023 [443/740] Linking target lib/librte_regexdev.so.23.0 00:02:58.023 [444/740] Linking target lib/librte_power.so.23.0 00:02:58.023 [445/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:58.023 [446/740] Linking static target lib/librte_security.a 00:02:58.023 [447/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.023 [448/740] Linking target lib/librte_rib.so.23.0 00:02:58.280 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:58.280 [450/740] Generating lib/rte_vhost_def with a custom command 00:02:58.280 [451/740] Generating lib/rte_vhost_mingw with a custom command 00:02:58.280 [452/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:58.281 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:58.281 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.281 [455/740] Linking target lib/librte_security.so.23.0 00:02:58.281 [456/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:58.538 [457/740] Linking static target lib/librte_sched.a 00:02:58.538 [458/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:58.538 [459/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:58.795 [460/740] 
Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:58.795 [461/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.795 [462/740] Linking target lib/librte_sched.so.23.0 00:02:58.795 [463/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:58.795 [464/740] Generating lib/rte_ipsec_def with a custom command 00:02:58.795 [465/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:58.795 [466/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:58.795 [467/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:59.053 [468/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:59.053 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:59.053 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:59.053 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:59.310 [472/740] Generating lib/rte_fib_def with a custom command 00:02:59.310 [473/740] Generating lib/rte_fib_mingw with a custom command 00:02:59.310 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:59.567 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:59.567 [476/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:59.567 [477/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:59.567 [478/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:59.567 [479/740] Linking static target lib/librte_ipsec.a 00:02:59.825 [480/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:59.825 [481/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:59.825 [482/740] Linking static target lib/librte_fib.a 00:02:59.825 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:59.825 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:00.083 [485/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.083 [486/740] Linking target lib/librte_ipsec.so.23.0 00:03:00.083 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:00.083 [488/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.083 [489/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:00.083 [490/740] Linking target lib/librte_fib.so.23.0 00:03:00.083 [491/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:00.649 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:00.649 [493/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:00.649 [494/740] Generating lib/rte_port_def with a custom command 00:03:00.649 [495/740] Generating lib/rte_port_mingw with a custom command 00:03:00.649 [496/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:00.649 [497/740] Generating lib/rte_pdump_def with a custom command 00:03:00.649 [498/740] Generating lib/rte_pdump_mingw with a custom command 00:03:00.649 [499/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:00.649 [500/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:00.649 [501/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:00.906 [502/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:00.906 [503/740] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:00.906 [504/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:00.906 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:01.164 [506/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:01.164 [507/740] Linking static target lib/librte_port.a 00:03:01.164 [508/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:01.164 [509/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:01.164 [510/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:01.422 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:01.422 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:01.422 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:01.422 [514/740] Linking static target lib/librte_pdump.a 00:03:01.681 [515/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.681 [516/740] Linking target lib/librte_port.so.23.0 00:03:01.681 [517/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.681 [518/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:01.681 [519/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:01.681 [520/740] Linking target lib/librte_pdump.so.23.0 00:03:01.681 [521/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:01.939 [522/740] Generating lib/rte_table_def with a custom command 00:03:01.939 [523/740] Generating lib/rte_table_mingw with a custom command 00:03:01.939 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:01.939 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:02.197 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:02.197 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:02.197 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:02.197 [529/740] Generating lib/rte_pipeline_def with a custom command 00:03:02.197 [530/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:02.197 [531/740] Generating lib/rte_pipeline_mingw with a custom command 00:03:02.197 [532/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:02.197 [533/740] Linking static target lib/librte_table.a 00:03:02.455 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:02.714 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:02.714 [536/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:02.714 [537/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:02.714 [538/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.972 [539/740] Linking target lib/librte_table.so.23.0 00:03:02.972 [540/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:02.972 [541/740] Generating lib/rte_graph_def with a custom command 00:03:02.972 [542/740] Generating lib/rte_graph_mingw with a custom command 00:03:02.972 [543/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:02.972 [544/740] Compiling C object 
lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:03.231 [545/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:03.231 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:03.231 [547/740] Linking static target lib/librte_graph.a 00:03:03.231 [548/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:03.231 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:03.489 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:03.489 [551/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:03.489 [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:03.748 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:03.748 [554/740] Generating lib/rte_node_def with a custom command 00:03:03.748 [555/740] Generating lib/rte_node_mingw with a custom command 00:03:03.748 [556/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:04.006 [557/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:04.006 [558/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:04.006 [559/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.006 [560/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:04.006 [561/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.006 [562/740] Linking target lib/librte_graph.so.23.0 00:03:04.006 [563/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.006 [564/740] Generating drivers/rte_bus_pci_def with a custom command 00:03:04.006 [565/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:04.006 [566/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:04.006 [567/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:04.006 [568/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:04.265 [569/740] Generating drivers/rte_bus_vdev_def with a custom command 00:03:04.265 [570/740] Linking static target lib/librte_node.a 00:03:04.265 [571/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:04.265 [572/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.265 [573/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:04.265 [574/740] Generating drivers/rte_mempool_ring_def with a custom command 00:03:04.265 [575/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:04.265 [576/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:04.265 [577/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.265 [578/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:04.265 [579/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.265 [580/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.265 [581/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:04.265 [582/740] Linking target lib/librte_node.so.23.0 00:03:04.524 [583/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:04.524 [584/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:04.524 [585/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 
00:03:04.524 [586/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.524 [587/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.524 [588/740] Linking static target drivers/librte_bus_pci.a 00:03:04.524 [589/740] Linking static target drivers/librte_bus_vdev.a 00:03:04.798 [590/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:04.798 [591/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.798 [592/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.798 [593/740] Linking target drivers/librte_bus_vdev.so.23.0 00:03:04.798 [594/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:04.798 [595/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.798 [596/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:04.798 [597/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:04.798 [598/740] Linking target drivers/librte_bus_pci.so.23.0 00:03:05.057 [599/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:05.057 [600/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:05.057 [601/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:05.057 [602/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:05.057 [603/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.057 [604/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.057 [605/740] Linking static target drivers/librte_mempool_ring.a 00:03:05.315 [606/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.315 [607/740] Linking target drivers/librte_mempool_ring.so.23.0 00:03:05.315 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:05.574 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:05.833 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:05.833 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:06.091 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:06.349 [613/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:06.349 [614/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:06.349 [615/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:06.607 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:06.608 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:06.866 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:06.866 [619/740] Generating drivers/rte_net_i40e_def with a custom command 00:03:06.866 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:06.866 [621/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:07.125 [622/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:07.384 
[623/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:07.952 [624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:07.952 [625/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:07.952 [626/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:07.952 [627/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:07.952 [628/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:07.952 [629/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:07.952 [630/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:07.952 [631/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:08.210 [632/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:08.210 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:08.467 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:08.724 [635/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:08.724 [636/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:08.724 [637/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:08.724 [638/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:08.724 [639/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:08.982 [640/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:08.982 [641/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:08.982 [642/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:08.982 [643/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:08.982 [644/740] Linking static target drivers/librte_net_i40e.a 00:03:08.982 [645/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:08.982 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:09.239 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:09.239 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:09.496 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:09.496 [650/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:09.496 [651/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.496 [652/740] Linking target drivers/librte_net_i40e.so.23.0 00:03:09.753 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:09.753 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:09.753 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:09.753 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:09.753 [657/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:09.753 [658/740] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:10.010 [659/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:10.010 [660/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:10.010 [661/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:10.010 [662/740] Linking static target lib/librte_vhost.a 00:03:10.301 [663/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:10.301 [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:10.301 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:10.301 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:10.301 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:10.558 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:10.815 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:11.072 [670/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:11.072 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:11.072 [672/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.072 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:11.072 [674/740] Linking target lib/librte_vhost.so.23.0 00:03:11.330 [675/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:11.330 [676/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:11.330 [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:11.588 [678/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:11.588 [679/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:11.588 [680/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:11.588 [681/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:11.847 [682/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:11.847 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:11.847 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:11.847 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:12.105 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:12.105 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:12.106 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:12.106 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:12.106 [690/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:12.365 [691/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:12.365 [692/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:12.365 [693/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:12.623 [694/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:12.623 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 
00:03:12.883 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:13.142 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:13.142 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:13.142 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:13.142 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:13.400 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:13.659 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:13.659 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:13.659 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:13.659 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:13.918 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:13.918 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:14.184 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:14.184 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:14.501 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:14.501 [711/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:14.501 [712/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:14.501 [713/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:14.787 [714/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:14.787 [715/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:14.787 [716/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:14.787 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:15.045 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:15.304 [719/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:15.304 [720/740] Linking static target lib/librte_pipeline.a 00:03:15.304 [721/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:15.562 [722/740] Linking target app/dpdk-dumpcap 00:03:15.562 [723/740] Linking target app/dpdk-pdump 00:03:15.562 [724/740] Linking target app/dpdk-proc-info 00:03:15.562 [725/740] Linking target app/dpdk-test-cmdline 00:03:15.820 [726/740] Linking target app/dpdk-test-bbdev 00:03:15.820 [727/740] Linking target app/dpdk-test-compress-perf 00:03:15.820 [728/740] Linking target app/dpdk-test-crypto-perf 00:03:15.820 [729/740] Linking target app/dpdk-test-eventdev 00:03:15.820 [730/740] Linking target app/dpdk-test-acl 00:03:16.078 [731/740] Linking target app/dpdk-test-gpudev 00:03:16.078 [732/740] Linking target app/dpdk-test-fib 00:03:16.078 [733/740] Linking target app/dpdk-test-flow-perf 00:03:16.078 [734/740] Linking target app/dpdk-test-regex 00:03:16.078 [735/740] Linking target app/dpdk-test-pipeline 00:03:16.078 [736/740] Linking target app/dpdk-test-sad 00:03:16.078 [737/740] Linking target app/dpdk-test-security-perf 00:03:16.078 [738/740] Linking target app/dpdk-testpmd 00:03:20.257 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.515 [740/740] Linking target lib/librte_pipeline.so.23.0 00:03:20.515 15:38:55 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 
00:03:20.515 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:20.515 [0/1] Installing files. 00:03:20.776 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 
Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.776 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:20.777 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:20.777 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.777 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:20.778 
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.778 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:20.778 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:20.778 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.778 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.778 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.778 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.778 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.778 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.778 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.778 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.778 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.778 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.778 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.778 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.778 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.778 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:20.778 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
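The per-example source trees installed above under share/dpdk/examples each carry their own Makefile, so they can be rebuilt standalone against this install. A minimal sketch, assuming the build/ prefix shown in the log and that libdpdk.pc is discoverable via PKG_CONFIG_PATH (the .pc files land in lib/pkgconfig later in this log):
# hypothetical standalone rebuild of the skeleton example; the installed
# example Makefiles resolve compiler and linker flags through pkg-config
export DPDK_BUILD=/home/vagrant/spdk_repo/dpdk/build
export PKG_CONFIG_PATH=$DPDK_BUILD/lib/pkgconfig
make -C $DPDK_BUILD/share/dpdk/examples/skeleton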
00:03:21.039 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.039 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:21.040 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:21.040 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:21.040 Installing drivers/librte_net_i40e.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.040 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:21.040 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
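At this point the versioned libraries (librte_*.so.23.0), the PMD shared objects under lib/dpdk/pmds-23.0, and the dpdk-* apps have all landed in the build/ prefix, and the header install begins. A minimal sketch of making the binaries usable from a shell, assuming the same prefix; the variable name is an illustration, only the paths come from the log:
# put the freshly installed dpdk-* apps on PATH and make the versioned
# shared objects resolvable at run time without a system-wide install
export DPDK_BUILD=/home/vagrant/spdk_repo/dpdk/build
export PATH=$DPDK_BUILD/bin:$PATH
export LD_LIBRARY_PATH=$DPDK_BUILD/lib:$LD_LIBRARY_PATH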
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.040 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.041 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:21.042 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:21.042 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:21.042 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:21.042 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:21.042 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:21.042 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:21.042 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:21.042 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:21.042 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:21.042 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:21.042 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:21.042 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:21.042 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:21.042 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:21.042 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:21.042 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:21.042 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:21.042 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:21.042 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:21.042 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:21.042 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
00:03:21.042 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:21.042 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:21.042 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:21.042 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:21.042 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:21.042 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:21.042 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:21.042 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:21.042 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:21.042 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:21.042 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:21.042 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:21.042 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:21.042 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:21.042 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:21.042 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:21.042 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:21.042 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:21.042 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:21.042 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:21.042 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:21.042 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:21.042 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:21.042 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:21.042 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:21.042 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:21.042 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:21.042 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:21.042 
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:21.042 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:21.042 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:21.042 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:21.042 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:21.042 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:21.042 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:21.042 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:21.042 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:21.042 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:21.042 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:21.042 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:21.042 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:21.042 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:21.042 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:21.042 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:21.042 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:21.042 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:21.042 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:21.042 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:21.042 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:21.042 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:21.042 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:21.042 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:21.042 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:21.042 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:21.042 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:21.042 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:21.042 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:21.042 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:21.042 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:21.042 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:21.042 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:21.042 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:21.042 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:21.042 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:21.042 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:21.042 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:21.042 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:21.043 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:21.043 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:21.043 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:21.043 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:21.043 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:21.043 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:21.043 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:21.043 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:21.043 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:21.043 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:21.043 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:21.043 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:21.043 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:21.043 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:21.043 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:21.043 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:21.043 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:21.043 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:21.043 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:21.043 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:21.043 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:21.043 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:21.043 
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:21.043 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:21.043 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:21.043 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:21.043 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:21.043 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:21.043 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:21.043 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:21.043 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:21.043 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:21.043 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:21.043 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:21.043 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:21.043 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:21.043 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:21.043 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:21.301 15:38:55 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:03:21.301 15:38:55 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:21.301 15:38:55 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:03:21.301 15:38:55 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:21.301 00:03:21.301 real 0m42.235s 00:03:21.301 user 4m7.831s 00:03:21.301 sys 0m55.162s 00:03:21.301 15:38:55 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:21.301 ************************************ 00:03:21.301 END TEST build_native_dpdk 00:03:21.301 ************************************ 00:03:21.301 15:38:55 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:21.301 15:38:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:21.301 15:38:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:21.301 15:38:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:21.301 15:38:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:21.301 15:38:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:21.301 15:38:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:21.301 15:38:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:21.301 15:38:55 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:21.301 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:21.560 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.560 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.560 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:21.818 Using 'verbs' RDMA provider 00:03:38.076 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:52.950 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:53.518 Creating mk/config.mk...done. 00:03:53.518 Creating mk/cc.flags.mk...done. 00:03:53.518 Type 'make' to build. 00:03:53.518 15:39:28 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:53.518 15:39:28 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:53.518 15:39:28 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:53.518 15:39:28 -- common/autotest_common.sh@10 -- $ set +x 00:03:53.518 ************************************ 00:03:53.518 START TEST make 00:03:53.518 ************************************ 00:03:53.518 15:39:28 make -- common/autotest_common.sh@1121 -- $ make -j10 00:03:53.777 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:53.777 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:53.777 meson setup builddir \ 00:03:53.777 -Dwith-libaio=enabled \ 00:03:53.777 -Dwith-liburing=enabled \ 00:03:53.777 -Dwith-libvfn=disabled \ 00:03:53.777 -Dwith-spdk=false && \ 00:03:53.777 meson compile -C builddir && \ 00:03:53.777 cd -) 00:03:53.777 make[1]: Nothing to be done for 'all'.
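For reference, a minimal sketch of how the DPDK install above is consumed: the libdpdk.pc and libdpdk-libs.pc files placed in /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig are resolved through pkg-config, which is what the configure step's --with-dpdk flag and the xnvme meson setup rely on. Assuming that install prefix, and a hypothetical hello_dpdk.c consumer (not part of this build), one could link against these shared libraries like so:

    # Point pkg-config at the locally installed DPDK (path taken from the log above)
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    # Inspect the compile and link flags recorded in libdpdk.pc
    pkg-config --cflags libdpdk
    pkg-config --libs libdpdk
    # Build the hypothetical consumer; the rpath keeps the .so.23 symlinks
    # installed above resolvable at run time without ldconfig
    cc hello_dpdk.c $(pkg-config --cflags --libs libdpdk) \
       -Wl,-rpath,/home/vagrant/spdk_repo/dpdk/build/lib -o hello_dpdk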
00:03:56.311 The Meson build system 00:03:56.311 Version: 1.3.1 00:03:56.311 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:56.311 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:56.311 Build type: native build 00:03:56.311 Project name: xnvme 00:03:56.311 Project version: 0.7.3 00:03:56.311 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:56.311 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:56.311 Host machine cpu family: x86_64 00:03:56.311 Host machine cpu: x86_64 00:03:56.311 Message: host_machine.system: linux 00:03:56.311 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:56.311 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:56.311 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:56.311 Run-time dependency threads found: YES 00:03:56.311 Has header "setupapi.h" : NO 00:03:56.311 Has header "linux/blkzoned.h" : YES 00:03:56.311 Has header "linux/blkzoned.h" : YES (cached) 00:03:56.311 Has header "libaio.h" : YES 00:03:56.311 Library aio found: YES 00:03:56.311 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:56.311 Run-time dependency liburing found: YES 2.2 00:03:56.311 Dependency libvfn skipped: feature with-libvfn disabled 00:03:56.311 Run-time dependency appleframeworks found: NO (tried framework) 00:03:56.311 Run-time dependency appleframeworks found: NO (tried framework) 00:03:56.311 Configuring xnvme_config.h using configuration 00:03:56.311 Configuring xnvme.spec using configuration 00:03:56.311 Run-time dependency bash-completion found: YES 2.11 00:03:56.311 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:56.311 Program cp found: YES (/usr/bin/cp) 00:03:56.311 Has header "winsock2.h" : NO 00:03:56.311 Has header "dbghelp.h" : NO 00:03:56.311 Library rpcrt4 found: NO 00:03:56.311 Library rt found: YES 00:03:56.311 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:56.311 Found CMake: /usr/bin/cmake (3.27.7) 00:03:56.311 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:56.311 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:56.311 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:56.311 Build targets in project: 32 00:03:56.311 00:03:56.311 xnvme 0.7.3 00:03:56.311 00:03:56.311 User defined options 00:03:56.311 with-libaio : enabled 00:03:56.311 with-liburing: enabled 00:03:56.311 with-libvfn : disabled 00:03:56.311 with-spdk : false 00:03:56.311 00:03:56.311 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:56.311 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:56.311 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:56.311 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:56.311 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:56.311 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:56.311 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:56.311 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:56.311 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:56.311 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:56.311 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:56.311 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 
00:03:56.311 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:56.311 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:56.311 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:56.570 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:56.570 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:56.570 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:56.570 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:56.570 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:56.570 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:56.570 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:56.570 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:56.570 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:56.570 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:56.570 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:56.570 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:56.570 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:56.570 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:56.570 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:56.570 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:56.570 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:56.570 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:56.570 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:56.570 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:56.570 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:56.570 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:56.570 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:56.570 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:56.570 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:56.570 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:56.570 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:56.570 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:56.570 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:56.570 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:56.570 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:56.570 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:56.570 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:56.570 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:56.570 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:56.570 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:56.570 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:56.571 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:56.829 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:56.829 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:56.829 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:56.829 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:56.829 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:56.829 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:56.829 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:56.829 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:56.829 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:56.829 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:56.829 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:56.829 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:56.829 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:56.829 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:56.829 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:56.829 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:56.829 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:56.829 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:56.829 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:56.829 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:56.829 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:57.088 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:57.088 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:57.088 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:57.088 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:57.088 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:57.088 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:57.088 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:57.088 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:57.088 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:57.088 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:57.088 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:57.088 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:57.088 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:57.088 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:57.088 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:57.088 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:57.088 [89/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:57.348 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:57.348 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:57.348 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:57.348 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:57.348 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:57.348 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:57.348 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:57.348 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:57.348 [98/203] Compiling C 
object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:57.348 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:57.348 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:57.348 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:57.348 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:57.348 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:57.348 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:57.348 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:57.348 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:57.348 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:57.348 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:57.348 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:57.348 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:57.348 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:57.348 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:57.348 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:57.348 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:57.348 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:57.348 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:57.348 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:57.348 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:57.348 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:57.348 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:57.348 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:57.348 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:57.607 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:57.607 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:57.607 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:57.607 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:57.607 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:57.607 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:57.607 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:57.607 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:57.607 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:57.607 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:57.607 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:57.607 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:57.607 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:57.607 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:57.607 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:57.607 [138/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:57.607 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:57.607 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:57.607 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:57.607 [142/203] Compiling C object 
lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:57.607 [143/203] Linking target lib/libxnvme.so 00:03:57.868 [144/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:57.868 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:57.868 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:57.868 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:57.868 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:57.868 [149/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:57.868 [150/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:57.868 [151/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:57.868 [152/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:57.868 [153/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:57.868 [154/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:57.868 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:57.868 [156/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:57.868 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:57.868 [158/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:57.868 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:58.126 [160/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:58.126 [161/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:58.126 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:58.126 [163/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:58.126 [164/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:58.126 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:58.126 [166/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:58.126 [167/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:58.126 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:58.126 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:58.126 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:58.126 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:58.126 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:58.126 [173/203] Linking static target lib/libxnvme.a 00:03:58.383 [174/203] Linking target tests/xnvme_tests_cli 00:03:58.384 [175/203] Linking target tests/xnvme_tests_enum 00:03:58.384 [176/203] Linking target tests/xnvme_tests_znd_append 00:03:58.384 [177/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:58.384 [178/203] Linking target tests/xnvme_tests_async_intf 00:03:58.384 [179/203] Linking target tests/xnvme_tests_buf 00:03:58.384 [180/203] Linking target tests/xnvme_tests_lblk 00:03:58.384 [181/203] Linking target tests/xnvme_tests_scc 00:03:58.384 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:58.384 [183/203] Linking target tests/xnvme_tests_znd_state 00:03:58.384 [184/203] Linking target tests/xnvme_tests_ioworker 00:03:58.384 [185/203] Linking target tests/xnvme_tests_xnvme_file 00:03:58.384 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:58.384 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:58.384 [188/203] Linking target tools/xnvme 00:03:58.384 [189/203] Linking target tests/xnvme_tests_map 00:03:58.384 [190/203] Linking target tests/xnvme_tests_kvs 00:03:58.384 [191/203] Linking 
target tools/xnvme_file 00:03:58.384 [192/203] Linking target tools/lblk 00:03:58.384 [193/203] Linking target tools/xdd 00:03:58.384 [194/203] Linking target tools/zoned 00:03:58.384 [195/203] Linking target examples/xnvme_enum 00:03:58.384 [196/203] Linking target examples/xnvme_hello 00:03:58.384 [197/203] Linking target examples/xnvme_dev 00:03:58.384 [198/203] Linking target tools/kvs 00:03:58.384 [199/203] Linking target examples/xnvme_single_sync 00:03:58.384 [200/203] Linking target examples/xnvme_single_async 00:03:58.384 [201/203] Linking target examples/xnvme_io_async 00:03:58.384 [202/203] Linking target examples/zoned_io_sync 00:03:58.384 [203/203] Linking target examples/zoned_io_async 00:03:58.384 INFO: autodetecting backend as ninja 00:03:58.384 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:58.384 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:16.449 CC lib/log/log.o 00:04:16.449 CC lib/log/log_flags.o 00:04:16.449 CC lib/log/log_deprecated.o 00:04:16.449 CC lib/ut_mock/mock.o 00:04:16.449 CC lib/ut/ut.o 00:04:16.449 LIB libspdk_log.a 00:04:16.449 LIB libspdk_ut_mock.a 00:04:16.449 LIB libspdk_ut.a 00:04:16.449 SO libspdk_log.so.7.0 00:04:16.449 SO libspdk_ut.so.2.0 00:04:16.449 SO libspdk_ut_mock.so.6.0 00:04:16.449 SYMLINK libspdk_ut.so 00:04:16.449 SYMLINK libspdk_log.so 00:04:16.449 SYMLINK libspdk_ut_mock.so 00:04:16.449 CXX lib/trace_parser/trace.o 00:04:16.449 CC lib/util/base64.o 00:04:16.449 CC lib/dma/dma.o 00:04:16.449 CC lib/util/crc16.o 00:04:16.449 CC lib/util/cpuset.o 00:04:16.449 CC lib/util/bit_array.o 00:04:16.449 CC lib/util/crc32c.o 00:04:16.449 CC lib/ioat/ioat.o 00:04:16.449 CC lib/util/crc32.o 00:04:16.449 CC lib/vfio_user/host/vfio_user_pci.o 00:04:16.449 CC lib/vfio_user/host/vfio_user.o 00:04:16.449 CC lib/util/crc32_ieee.o 00:04:16.449 CC lib/util/crc64.o 00:04:16.449 LIB libspdk_dma.a 00:04:16.449 CC lib/util/dif.o 00:04:16.449 CC lib/util/fd.o 00:04:16.449 SO libspdk_dma.so.4.0 00:04:16.449 CC lib/util/file.o 00:04:16.449 CC lib/util/hexlify.o 00:04:16.449 SYMLINK libspdk_dma.so 00:04:16.449 CC lib/util/iov.o 00:04:16.449 LIB libspdk_ioat.a 00:04:16.449 CC lib/util/math.o 00:04:16.449 SO libspdk_ioat.so.7.0 00:04:16.449 CC lib/util/pipe.o 00:04:16.449 CC lib/util/strerror_tls.o 00:04:16.449 LIB libspdk_vfio_user.a 00:04:16.449 SYMLINK libspdk_ioat.so 00:04:16.449 CC lib/util/string.o 00:04:16.449 CC lib/util/uuid.o 00:04:16.449 CC lib/util/fd_group.o 00:04:16.449 SO libspdk_vfio_user.so.5.0 00:04:16.449 CC lib/util/xor.o 00:04:16.449 CC lib/util/zipf.o 00:04:16.449 SYMLINK libspdk_vfio_user.so 00:04:16.449 LIB libspdk_util.a 00:04:16.449 SO libspdk_util.so.9.0 00:04:16.449 LIB libspdk_trace_parser.a 00:04:16.449 SO libspdk_trace_parser.so.5.0 00:04:16.449 SYMLINK libspdk_util.so 00:04:16.449 SYMLINK libspdk_trace_parser.so 00:04:16.449 CC lib/vmd/vmd.o 00:04:16.449 CC lib/vmd/led.o 00:04:16.449 CC lib/env_dpdk/env.o 00:04:16.449 CC lib/env_dpdk/memory.o 00:04:16.449 CC lib/env_dpdk/pci.o 00:04:16.449 CC lib/json/json_parse.o 00:04:16.449 CC lib/env_dpdk/init.o 00:04:16.449 CC lib/idxd/idxd.o 00:04:16.449 CC lib/conf/conf.o 00:04:16.449 CC lib/rdma/common.o 00:04:16.449 CC lib/env_dpdk/threads.o 00:04:16.449 LIB libspdk_conf.a 00:04:16.449 CC lib/json/json_util.o 00:04:16.449 SO libspdk_conf.so.6.0 00:04:16.449 CC lib/idxd/idxd_user.o 00:04:16.449 CC lib/rdma/rdma_verbs.o 00:04:16.449 SYMLINK libspdk_conf.so 00:04:16.449 CC lib/idxd/idxd_kernel.o 00:04:16.449 CC 
lib/env_dpdk/pci_ioat.o 00:04:16.449 CC lib/json/json_write.o 00:04:16.449 CC lib/env_dpdk/pci_virtio.o 00:04:16.449 CC lib/env_dpdk/pci_vmd.o 00:04:16.449 CC lib/env_dpdk/pci_idxd.o 00:04:16.449 CC lib/env_dpdk/pci_event.o 00:04:16.449 CC lib/env_dpdk/sigbus_handler.o 00:04:16.449 LIB libspdk_rdma.a 00:04:16.449 CC lib/env_dpdk/pci_dpdk.o 00:04:16.449 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:16.449 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:16.449 SO libspdk_rdma.so.6.0 00:04:16.449 LIB libspdk_json.a 00:04:16.449 LIB libspdk_idxd.a 00:04:16.449 SYMLINK libspdk_rdma.so 00:04:16.449 SO libspdk_json.so.6.0 00:04:16.449 SO libspdk_idxd.so.12.0 00:04:16.449 LIB libspdk_vmd.a 00:04:16.449 SYMLINK libspdk_json.so 00:04:16.449 SO libspdk_vmd.so.6.0 00:04:16.449 SYMLINK libspdk_idxd.so 00:04:16.449 SYMLINK libspdk_vmd.so 00:04:16.449 CC lib/jsonrpc/jsonrpc_server.o 00:04:16.449 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:16.450 CC lib/jsonrpc/jsonrpc_client.o 00:04:16.450 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:16.708 LIB libspdk_jsonrpc.a 00:04:16.708 SO libspdk_jsonrpc.so.6.0 00:04:16.708 LIB libspdk_env_dpdk.a 00:04:16.708 SYMLINK libspdk_jsonrpc.so 00:04:16.967 SO libspdk_env_dpdk.so.14.0 00:04:16.967 SYMLINK libspdk_env_dpdk.so 00:04:17.226 CC lib/rpc/rpc.o 00:04:17.226 LIB libspdk_rpc.a 00:04:17.485 SO libspdk_rpc.so.6.0 00:04:17.485 SYMLINK libspdk_rpc.so 00:04:17.744 CC lib/trace/trace.o 00:04:17.744 CC lib/trace/trace_flags.o 00:04:18.003 CC lib/trace/trace_rpc.o 00:04:18.003 CC lib/keyring/keyring.o 00:04:18.003 CC lib/notify/notify.o 00:04:18.003 CC lib/keyring/keyring_rpc.o 00:04:18.003 CC lib/notify/notify_rpc.o 00:04:18.003 LIB libspdk_notify.a 00:04:18.003 SO libspdk_notify.so.6.0 00:04:18.003 LIB libspdk_keyring.a 00:04:18.264 LIB libspdk_trace.a 00:04:18.264 SYMLINK libspdk_notify.so 00:04:18.264 SO libspdk_keyring.so.1.0 00:04:18.264 SO libspdk_trace.so.10.0 00:04:18.264 SYMLINK libspdk_keyring.so 00:04:18.264 SYMLINK libspdk_trace.so 00:04:18.844 CC lib/thread/iobuf.o 00:04:18.844 CC lib/thread/thread.o 00:04:18.844 CC lib/sock/sock.o 00:04:18.844 CC lib/sock/sock_rpc.o 00:04:19.101 LIB libspdk_sock.a 00:04:19.102 SO libspdk_sock.so.9.0 00:04:19.360 SYMLINK libspdk_sock.so 00:04:19.617 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:19.617 CC lib/nvme/nvme_ctrlr.o 00:04:19.617 CC lib/nvme/nvme_fabric.o 00:04:19.617 CC lib/nvme/nvme_ns_cmd.o 00:04:19.617 CC lib/nvme/nvme_ns.o 00:04:19.617 CC lib/nvme/nvme_pcie_common.o 00:04:19.617 CC lib/nvme/nvme_pcie.o 00:04:19.617 CC lib/nvme/nvme_qpair.o 00:04:19.617 CC lib/nvme/nvme.o 00:04:20.183 LIB libspdk_thread.a 00:04:20.183 SO libspdk_thread.so.10.0 00:04:20.183 CC lib/nvme/nvme_quirks.o 00:04:20.441 CC lib/nvme/nvme_transport.o 00:04:20.441 SYMLINK libspdk_thread.so 00:04:20.441 CC lib/nvme/nvme_discovery.o 00:04:20.441 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:20.441 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:20.441 CC lib/nvme/nvme_tcp.o 00:04:20.441 CC lib/nvme/nvme_opal.o 00:04:20.441 CC lib/nvme/nvme_io_msg.o 00:04:20.698 CC lib/nvme/nvme_poll_group.o 00:04:20.955 CC lib/nvme/nvme_zns.o 00:04:20.955 CC lib/accel/accel.o 00:04:20.955 CC lib/accel/accel_rpc.o 00:04:20.955 CC lib/accel/accel_sw.o 00:04:20.955 CC lib/nvme/nvme_stubs.o 00:04:21.211 CC lib/blob/blobstore.o 00:04:21.211 CC lib/blob/request.o 00:04:21.211 CC lib/init/json_config.o 00:04:21.211 CC lib/init/subsystem.o 00:04:21.211 CC lib/virtio/virtio.o 00:04:21.468 CC lib/init/subsystem_rpc.o 00:04:21.468 CC lib/init/rpc.o 00:04:21.468 CC lib/nvme/nvme_auth.o 00:04:21.468 CC lib/blob/zeroes.o 
00:04:21.468 CC lib/blob/blob_bs_dev.o 00:04:21.468 CC lib/virtio/virtio_vhost_user.o 00:04:21.468 CC lib/virtio/virtio_vfio_user.o 00:04:21.468 LIB libspdk_init.a 00:04:21.468 SO libspdk_init.so.5.0 00:04:21.726 CC lib/nvme/nvme_cuse.o 00:04:21.726 SYMLINK libspdk_init.so 00:04:21.726 CC lib/nvme/nvme_rdma.o 00:04:21.726 CC lib/virtio/virtio_pci.o 00:04:21.983 CC lib/event/app.o 00:04:21.983 CC lib/event/reactor.o 00:04:21.983 CC lib/event/log_rpc.o 00:04:21.983 LIB libspdk_accel.a 00:04:21.983 SO libspdk_accel.so.15.0 00:04:21.983 CC lib/event/app_rpc.o 00:04:21.983 SYMLINK libspdk_accel.so 00:04:21.983 CC lib/event/scheduler_static.o 00:04:21.983 LIB libspdk_virtio.a 00:04:21.983 SO libspdk_virtio.so.7.0 00:04:22.240 SYMLINK libspdk_virtio.so 00:04:22.240 CC lib/bdev/bdev.o 00:04:22.240 CC lib/bdev/bdev_zone.o 00:04:22.240 CC lib/bdev/bdev_rpc.o 00:04:22.240 CC lib/bdev/part.o 00:04:22.240 CC lib/bdev/scsi_nvme.o 00:04:22.240 LIB libspdk_event.a 00:04:22.510 SO libspdk_event.so.13.0 00:04:22.510 SYMLINK libspdk_event.so 00:04:23.077 LIB libspdk_nvme.a 00:04:23.335 SO libspdk_nvme.so.13.0 00:04:23.593 SYMLINK libspdk_nvme.so 00:04:24.527 LIB libspdk_blob.a 00:04:24.527 SO libspdk_blob.so.11.0 00:04:24.785 SYMLINK libspdk_blob.so 00:04:24.785 LIB libspdk_bdev.a 00:04:25.043 SO libspdk_bdev.so.15.0 00:04:25.043 CC lib/blobfs/blobfs.o 00:04:25.043 CC lib/blobfs/tree.o 00:04:25.043 CC lib/lvol/lvol.o 00:04:25.043 SYMLINK libspdk_bdev.so 00:04:25.302 CC lib/nbd/nbd.o 00:04:25.302 CC lib/nbd/nbd_rpc.o 00:04:25.302 CC lib/scsi/dev.o 00:04:25.302 CC lib/ublk/ublk.o 00:04:25.302 CC lib/scsi/lun.o 00:04:25.302 CC lib/scsi/port.o 00:04:25.302 CC lib/nvmf/ctrlr.o 00:04:25.302 CC lib/ftl/ftl_core.o 00:04:25.560 CC lib/scsi/scsi.o 00:04:25.560 CC lib/ftl/ftl_init.o 00:04:25.560 CC lib/ftl/ftl_layout.o 00:04:25.560 CC lib/ftl/ftl_debug.o 00:04:25.560 CC lib/scsi/scsi_bdev.o 00:04:25.560 CC lib/scsi/scsi_pr.o 00:04:25.819 LIB libspdk_nbd.a 00:04:25.819 CC lib/ftl/ftl_io.o 00:04:25.819 SO libspdk_nbd.so.7.0 00:04:25.819 SYMLINK libspdk_nbd.so 00:04:25.819 CC lib/ftl/ftl_sb.o 00:04:25.819 CC lib/ftl/ftl_l2p.o 00:04:25.819 CC lib/ftl/ftl_l2p_flat.o 00:04:25.819 CC lib/ublk/ublk_rpc.o 00:04:25.819 LIB libspdk_blobfs.a 00:04:26.077 CC lib/ftl/ftl_nv_cache.o 00:04:26.077 CC lib/ftl/ftl_band.o 00:04:26.077 SO libspdk_blobfs.so.10.0 00:04:26.077 CC lib/ftl/ftl_band_ops.o 00:04:26.077 LIB libspdk_lvol.a 00:04:26.077 CC lib/scsi/scsi_rpc.o 00:04:26.077 CC lib/ftl/ftl_writer.o 00:04:26.077 SYMLINK libspdk_blobfs.so 00:04:26.077 SO libspdk_lvol.so.10.0 00:04:26.077 CC lib/ftl/ftl_rq.o 00:04:26.077 LIB libspdk_ublk.a 00:04:26.077 CC lib/ftl/ftl_reloc.o 00:04:26.077 SO libspdk_ublk.so.3.0 00:04:26.077 SYMLINK libspdk_lvol.so 00:04:26.077 CC lib/ftl/ftl_l2p_cache.o 00:04:26.335 SYMLINK libspdk_ublk.so 00:04:26.335 CC lib/ftl/ftl_p2l.o 00:04:26.335 CC lib/scsi/task.o 00:04:26.335 CC lib/ftl/mngt/ftl_mngt.o 00:04:26.335 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:26.335 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:26.335 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:26.335 LIB libspdk_scsi.a 00:04:26.335 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:26.594 SO libspdk_scsi.so.9.0 00:04:26.594 CC lib/nvmf/ctrlr_discovery.o 00:04:26.594 CC lib/nvmf/ctrlr_bdev.o 00:04:26.594 CC lib/nvmf/subsystem.o 00:04:26.594 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:26.594 SYMLINK libspdk_scsi.so 00:04:26.594 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:26.594 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:26.594 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:26.852 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:04:26.852 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:26.852 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:26.852 CC lib/iscsi/conn.o 00:04:26.852 CC lib/vhost/vhost.o 00:04:26.852 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:26.852 CC lib/ftl/utils/ftl_conf.o 00:04:27.110 CC lib/iscsi/init_grp.o 00:04:27.110 CC lib/iscsi/iscsi.o 00:04:27.110 CC lib/iscsi/md5.o 00:04:27.110 CC lib/iscsi/param.o 00:04:27.110 CC lib/iscsi/portal_grp.o 00:04:27.110 CC lib/iscsi/tgt_node.o 00:04:27.111 CC lib/iscsi/iscsi_subsystem.o 00:04:27.369 CC lib/ftl/utils/ftl_md.o 00:04:27.369 CC lib/iscsi/iscsi_rpc.o 00:04:27.369 CC lib/ftl/utils/ftl_mempool.o 00:04:27.369 CC lib/ftl/utils/ftl_bitmap.o 00:04:27.628 CC lib/vhost/vhost_rpc.o 00:04:27.628 CC lib/vhost/vhost_scsi.o 00:04:27.628 CC lib/iscsi/task.o 00:04:27.628 CC lib/ftl/utils/ftl_property.o 00:04:27.628 CC lib/vhost/vhost_blk.o 00:04:27.628 CC lib/nvmf/nvmf.o 00:04:27.628 CC lib/nvmf/nvmf_rpc.o 00:04:27.628 CC lib/nvmf/transport.o 00:04:27.886 CC lib/nvmf/tcp.o 00:04:27.886 CC lib/nvmf/stubs.o 00:04:27.886 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:28.143 CC lib/nvmf/mdns_server.o 00:04:28.143 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:28.143 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:28.401 CC lib/nvmf/rdma.o 00:04:28.401 CC lib/nvmf/auth.o 00:04:28.401 CC lib/vhost/rte_vhost_user.o 00:04:28.401 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:28.401 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:28.660 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:28.660 LIB libspdk_iscsi.a 00:04:28.660 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:28.660 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:28.660 SO libspdk_iscsi.so.8.0 00:04:28.660 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:28.660 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:28.660 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:28.660 CC lib/ftl/base/ftl_base_dev.o 00:04:28.919 CC lib/ftl/base/ftl_base_bdev.o 00:04:28.919 SYMLINK libspdk_iscsi.so 00:04:28.919 CC lib/ftl/ftl_trace.o 00:04:29.177 LIB libspdk_ftl.a 00:04:29.436 LIB libspdk_vhost.a 00:04:29.436 SO libspdk_ftl.so.9.0 00:04:29.436 SO libspdk_vhost.so.8.0 00:04:29.436 SYMLINK libspdk_vhost.so 00:04:29.694 SYMLINK libspdk_ftl.so 00:04:30.628 LIB libspdk_nvmf.a 00:04:30.628 SO libspdk_nvmf.so.18.0 00:04:30.886 SYMLINK libspdk_nvmf.so 00:04:31.454 CC module/env_dpdk/env_dpdk_rpc.o 00:04:31.454 CC module/accel/dsa/accel_dsa.o 00:04:31.454 CC module/blob/bdev/blob_bdev.o 00:04:31.454 CC module/accel/iaa/accel_iaa.o 00:04:31.454 CC module/sock/posix/posix.o 00:04:31.454 CC module/accel/error/accel_error.o 00:04:31.454 CC module/keyring/file/keyring.o 00:04:31.454 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:31.454 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:31.454 CC module/accel/ioat/accel_ioat.o 00:04:31.454 LIB libspdk_env_dpdk_rpc.a 00:04:31.454 SO libspdk_env_dpdk_rpc.so.6.0 00:04:31.454 SYMLINK libspdk_env_dpdk_rpc.so 00:04:31.454 CC module/accel/error/accel_error_rpc.o 00:04:31.454 LIB libspdk_scheduler_dpdk_governor.a 00:04:31.454 CC module/keyring/file/keyring_rpc.o 00:04:31.454 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:31.713 LIB libspdk_scheduler_dynamic.a 00:04:31.713 CC module/accel/dsa/accel_dsa_rpc.o 00:04:31.713 CC module/accel/ioat/accel_ioat_rpc.o 00:04:31.713 CC module/accel/iaa/accel_iaa_rpc.o 00:04:31.713 SO libspdk_scheduler_dynamic.so.4.0 00:04:31.713 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:31.713 LIB libspdk_blob_bdev.a 00:04:31.713 LIB libspdk_accel_error.a 00:04:31.713 SYMLINK libspdk_scheduler_dynamic.so 
00:04:31.713 LIB libspdk_keyring_file.a 00:04:31.713 SO libspdk_blob_bdev.so.11.0 00:04:31.713 SO libspdk_keyring_file.so.1.0 00:04:31.713 LIB libspdk_accel_dsa.a 00:04:31.713 SO libspdk_accel_error.so.2.0 00:04:31.713 LIB libspdk_accel_ioat.a 00:04:31.713 LIB libspdk_accel_iaa.a 00:04:31.713 SYMLINK libspdk_blob_bdev.so 00:04:31.713 SO libspdk_accel_ioat.so.6.0 00:04:31.713 SO libspdk_accel_dsa.so.5.0 00:04:31.713 SO libspdk_accel_iaa.so.3.0 00:04:31.713 SYMLINK libspdk_keyring_file.so 00:04:31.713 SYMLINK libspdk_accel_error.so 00:04:31.713 SYMLINK libspdk_accel_ioat.so 00:04:31.713 SYMLINK libspdk_accel_iaa.so 00:04:31.713 CC module/scheduler/gscheduler/gscheduler.o 00:04:31.973 SYMLINK libspdk_accel_dsa.so 00:04:31.973 CC module/keyring/linux/keyring.o 00:04:31.973 CC module/keyring/linux/keyring_rpc.o 00:04:31.973 LIB libspdk_scheduler_gscheduler.a 00:04:31.973 CC module/bdev/delay/vbdev_delay.o 00:04:31.973 CC module/bdev/error/vbdev_error.o 00:04:31.973 CC module/bdev/gpt/gpt.o 00:04:31.973 CC module/bdev/malloc/bdev_malloc.o 00:04:31.973 CC module/bdev/lvol/vbdev_lvol.o 00:04:31.973 CC module/bdev/null/bdev_null.o 00:04:31.973 SO libspdk_scheduler_gscheduler.so.4.0 00:04:31.973 CC module/blobfs/bdev/blobfs_bdev.o 00:04:31.973 SYMLINK libspdk_scheduler_gscheduler.so 00:04:32.232 CC module/bdev/gpt/vbdev_gpt.o 00:04:32.232 LIB libspdk_keyring_linux.a 00:04:32.232 LIB libspdk_sock_posix.a 00:04:32.232 SO libspdk_keyring_linux.so.1.0 00:04:32.232 SO libspdk_sock_posix.so.6.0 00:04:32.232 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:32.232 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:32.232 SYMLINK libspdk_keyring_linux.so 00:04:32.232 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:32.232 SYMLINK libspdk_sock_posix.so 00:04:32.232 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:32.232 CC module/bdev/error/vbdev_error_rpc.o 00:04:32.232 CC module/bdev/null/bdev_null_rpc.o 00:04:32.232 LIB libspdk_blobfs_bdev.a 00:04:32.490 LIB libspdk_bdev_gpt.a 00:04:32.490 SO libspdk_blobfs_bdev.so.6.0 00:04:32.490 LIB libspdk_bdev_delay.a 00:04:32.490 SO libspdk_bdev_gpt.so.6.0 00:04:32.491 LIB libspdk_bdev_malloc.a 00:04:32.491 LIB libspdk_bdev_error.a 00:04:32.491 SYMLINK libspdk_blobfs_bdev.so 00:04:32.491 SO libspdk_bdev_delay.so.6.0 00:04:32.491 SO libspdk_bdev_malloc.so.6.0 00:04:32.491 LIB libspdk_bdev_null.a 00:04:32.491 SO libspdk_bdev_error.so.6.0 00:04:32.491 SYMLINK libspdk_bdev_gpt.so 00:04:32.491 SYMLINK libspdk_bdev_delay.so 00:04:32.491 SO libspdk_bdev_null.so.6.0 00:04:32.491 SYMLINK libspdk_bdev_malloc.so 00:04:32.491 CC module/bdev/nvme/bdev_nvme.o 00:04:32.491 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:32.491 SYMLINK libspdk_bdev_error.so 00:04:32.491 CC module/bdev/passthru/vbdev_passthru.o 00:04:32.491 LIB libspdk_bdev_lvol.a 00:04:32.491 SYMLINK libspdk_bdev_null.so 00:04:32.491 SO libspdk_bdev_lvol.so.6.0 00:04:32.749 CC module/bdev/raid/bdev_raid.o 00:04:32.749 CC module/bdev/split/vbdev_split.o 00:04:32.749 SYMLINK libspdk_bdev_lvol.so 00:04:32.749 CC module/bdev/split/vbdev_split_rpc.o 00:04:32.749 CC module/bdev/xnvme/bdev_xnvme.o 00:04:32.749 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:32.749 CC module/bdev/aio/bdev_aio.o 00:04:32.749 CC module/bdev/ftl/bdev_ftl.o 00:04:32.749 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:33.023 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:33.023 LIB libspdk_bdev_split.a 00:04:33.023 SO libspdk_bdev_split.so.6.0 00:04:33.023 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:33.023 SYMLINK libspdk_bdev_split.so 00:04:33.023 CC 
module/bdev/nvme/nvme_rpc.o 00:04:33.023 LIB libspdk_bdev_passthru.a 00:04:33.023 LIB libspdk_bdev_zone_block.a 00:04:33.023 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:33.023 CC module/bdev/aio/bdev_aio_rpc.o 00:04:33.023 SO libspdk_bdev_passthru.so.6.0 00:04:33.023 SO libspdk_bdev_zone_block.so.6.0 00:04:33.023 LIB libspdk_bdev_xnvme.a 00:04:33.288 SYMLINK libspdk_bdev_passthru.so 00:04:33.288 SO libspdk_bdev_xnvme.so.3.0 00:04:33.288 SYMLINK libspdk_bdev_zone_block.so 00:04:33.288 CC module/bdev/raid/bdev_raid_rpc.o 00:04:33.288 CC module/bdev/iscsi/bdev_iscsi.o 00:04:33.288 LIB libspdk_bdev_aio.a 00:04:33.288 CC module/bdev/nvme/bdev_mdns_client.o 00:04:33.288 SYMLINK libspdk_bdev_xnvme.so 00:04:33.288 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:33.288 CC module/bdev/nvme/vbdev_opal.o 00:04:33.288 SO libspdk_bdev_aio.so.6.0 00:04:33.288 LIB libspdk_bdev_ftl.a 00:04:33.288 SO libspdk_bdev_ftl.so.6.0 00:04:33.288 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:33.288 SYMLINK libspdk_bdev_aio.so 00:04:33.288 CC module/bdev/raid/bdev_raid_sb.o 00:04:33.288 SYMLINK libspdk_bdev_ftl.so 00:04:33.288 CC module/bdev/raid/raid0.o 00:04:33.288 CC module/bdev/raid/raid1.o 00:04:33.288 CC module/bdev/raid/concat.o 00:04:33.547 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:33.547 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:33.547 LIB libspdk_bdev_iscsi.a 00:04:33.547 SO libspdk_bdev_iscsi.so.6.0 00:04:33.547 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:33.547 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:33.547 SYMLINK libspdk_bdev_iscsi.so 00:04:33.806 LIB libspdk_bdev_raid.a 00:04:33.806 SO libspdk_bdev_raid.so.6.0 00:04:33.806 LIB libspdk_bdev_virtio.a 00:04:34.064 SYMLINK libspdk_bdev_raid.so 00:04:34.064 SO libspdk_bdev_virtio.so.6.0 00:04:34.064 SYMLINK libspdk_bdev_virtio.so 00:04:34.644 LIB libspdk_bdev_nvme.a 00:04:34.903 SO libspdk_bdev_nvme.so.7.0 00:04:34.903 SYMLINK libspdk_bdev_nvme.so 00:04:35.840 CC module/event/subsystems/vmd/vmd.o 00:04:35.840 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:35.840 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:35.840 CC module/event/subsystems/sock/sock.o 00:04:35.840 CC module/event/subsystems/iobuf/iobuf.o 00:04:35.840 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:35.840 CC module/event/subsystems/scheduler/scheduler.o 00:04:35.840 CC module/event/subsystems/keyring/keyring.o 00:04:35.840 LIB libspdk_event_vmd.a 00:04:35.840 LIB libspdk_event_scheduler.a 00:04:35.840 SO libspdk_event_vmd.so.6.0 00:04:35.840 LIB libspdk_event_vhost_blk.a 00:04:35.840 LIB libspdk_event_sock.a 00:04:35.840 LIB libspdk_event_iobuf.a 00:04:35.840 LIB libspdk_event_keyring.a 00:04:35.840 SO libspdk_event_scheduler.so.4.0 00:04:35.840 SO libspdk_event_sock.so.5.0 00:04:35.840 SO libspdk_event_vhost_blk.so.3.0 00:04:35.840 SO libspdk_event_keyring.so.1.0 00:04:35.840 SO libspdk_event_iobuf.so.3.0 00:04:35.840 SYMLINK libspdk_event_vmd.so 00:04:35.840 SYMLINK libspdk_event_scheduler.so 00:04:35.840 SYMLINK libspdk_event_sock.so 00:04:35.840 SYMLINK libspdk_event_keyring.so 00:04:35.840 SYMLINK libspdk_event_vhost_blk.so 00:04:35.840 SYMLINK libspdk_event_iobuf.so 00:04:36.408 CC module/event/subsystems/accel/accel.o 00:04:36.408 LIB libspdk_event_accel.a 00:04:36.667 SO libspdk_event_accel.so.6.0 00:04:36.667 SYMLINK libspdk_event_accel.so 00:04:36.926 CC module/event/subsystems/bdev/bdev.o 00:04:37.185 LIB libspdk_event_bdev.a 00:04:37.185 SO libspdk_event_bdev.so.6.0 00:04:37.444 SYMLINK libspdk_event_bdev.so 00:04:37.702 CC module/event/subsystems/scsi/scsi.o 
00:04:37.702 CC module/event/subsystems/nbd/nbd.o 00:04:37.702 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:37.702 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:37.702 CC module/event/subsystems/ublk/ublk.o 00:04:37.702 LIB libspdk_event_nbd.a 00:04:37.962 LIB libspdk_event_scsi.a 00:04:37.962 LIB libspdk_event_ublk.a 00:04:37.962 SO libspdk_event_scsi.so.6.0 00:04:37.962 SO libspdk_event_nbd.so.6.0 00:04:37.962 SO libspdk_event_ublk.so.3.0 00:04:37.962 SYMLINK libspdk_event_scsi.so 00:04:37.962 SYMLINK libspdk_event_nbd.so 00:04:37.962 LIB libspdk_event_nvmf.a 00:04:37.962 SO libspdk_event_nvmf.so.6.0 00:04:37.962 SYMLINK libspdk_event_ublk.so 00:04:37.962 SYMLINK libspdk_event_nvmf.so 00:04:38.220 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:38.220 CC module/event/subsystems/iscsi/iscsi.o 00:04:38.478 LIB libspdk_event_vhost_scsi.a 00:04:38.478 LIB libspdk_event_iscsi.a 00:04:38.478 SO libspdk_event_vhost_scsi.so.3.0 00:04:38.478 SO libspdk_event_iscsi.so.6.0 00:04:38.478 SYMLINK libspdk_event_vhost_scsi.so 00:04:38.478 SYMLINK libspdk_event_iscsi.so 00:04:38.735 SO libspdk.so.6.0 00:04:38.735 SYMLINK libspdk.so 00:04:38.993 TEST_HEADER include/spdk/accel.h 00:04:38.993 CXX app/trace/trace.o 00:04:38.993 TEST_HEADER include/spdk/accel_module.h 00:04:38.993 TEST_HEADER include/spdk/assert.h 00:04:38.993 TEST_HEADER include/spdk/barrier.h 00:04:38.993 TEST_HEADER include/spdk/base64.h 00:04:39.251 TEST_HEADER include/spdk/bdev.h 00:04:39.251 TEST_HEADER include/spdk/bdev_module.h 00:04:39.251 TEST_HEADER include/spdk/bdev_zone.h 00:04:39.251 TEST_HEADER include/spdk/bit_array.h 00:04:39.251 TEST_HEADER include/spdk/bit_pool.h 00:04:39.251 TEST_HEADER include/spdk/blob_bdev.h 00:04:39.251 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:39.251 TEST_HEADER include/spdk/blobfs.h 00:04:39.251 TEST_HEADER include/spdk/blob.h 00:04:39.251 TEST_HEADER include/spdk/conf.h 00:04:39.251 TEST_HEADER include/spdk/config.h 00:04:39.251 TEST_HEADER include/spdk/cpuset.h 00:04:39.251 TEST_HEADER include/spdk/crc16.h 00:04:39.251 TEST_HEADER include/spdk/crc32.h 00:04:39.251 TEST_HEADER include/spdk/crc64.h 00:04:39.251 TEST_HEADER include/spdk/dif.h 00:04:39.251 TEST_HEADER include/spdk/dma.h 00:04:39.251 TEST_HEADER include/spdk/endian.h 00:04:39.251 TEST_HEADER include/spdk/env_dpdk.h 00:04:39.251 TEST_HEADER include/spdk/env.h 00:04:39.251 TEST_HEADER include/spdk/event.h 00:04:39.251 TEST_HEADER include/spdk/fd_group.h 00:04:39.251 TEST_HEADER include/spdk/fd.h 00:04:39.251 TEST_HEADER include/spdk/file.h 00:04:39.251 TEST_HEADER include/spdk/ftl.h 00:04:39.251 CC test/event/event_perf/event_perf.o 00:04:39.251 TEST_HEADER include/spdk/gpt_spec.h 00:04:39.251 TEST_HEADER include/spdk/hexlify.h 00:04:39.251 CC examples/accel/perf/accel_perf.o 00:04:39.251 TEST_HEADER include/spdk/histogram_data.h 00:04:39.251 TEST_HEADER include/spdk/idxd.h 00:04:39.251 TEST_HEADER include/spdk/idxd_spec.h 00:04:39.251 TEST_HEADER include/spdk/init.h 00:04:39.251 TEST_HEADER include/spdk/ioat.h 00:04:39.251 TEST_HEADER include/spdk/ioat_spec.h 00:04:39.251 TEST_HEADER include/spdk/iscsi_spec.h 00:04:39.251 TEST_HEADER include/spdk/json.h 00:04:39.251 TEST_HEADER include/spdk/jsonrpc.h 00:04:39.251 CC test/accel/dif/dif.o 00:04:39.251 TEST_HEADER include/spdk/keyring.h 00:04:39.251 TEST_HEADER include/spdk/keyring_module.h 00:04:39.251 CC test/app/bdev_svc/bdev_svc.o 00:04:39.251 TEST_HEADER include/spdk/likely.h 00:04:39.251 TEST_HEADER include/spdk/log.h 00:04:39.251 CC test/bdev/bdevio/bdevio.o 
00:04:39.251 TEST_HEADER include/spdk/lvol.h 00:04:39.251 TEST_HEADER include/spdk/memory.h 00:04:39.251 CC test/dma/test_dma/test_dma.o 00:04:39.251 TEST_HEADER include/spdk/mmio.h 00:04:39.251 TEST_HEADER include/spdk/nbd.h 00:04:39.251 TEST_HEADER include/spdk/notify.h 00:04:39.251 CC test/blobfs/mkfs/mkfs.o 00:04:39.251 TEST_HEADER include/spdk/nvme.h 00:04:39.251 TEST_HEADER include/spdk/nvme_intel.h 00:04:39.251 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:39.251 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:39.251 TEST_HEADER include/spdk/nvme_spec.h 00:04:39.251 TEST_HEADER include/spdk/nvme_zns.h 00:04:39.251 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:39.251 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:39.251 TEST_HEADER include/spdk/nvmf.h 00:04:39.251 TEST_HEADER include/spdk/nvmf_spec.h 00:04:39.251 TEST_HEADER include/spdk/nvmf_transport.h 00:04:39.251 TEST_HEADER include/spdk/opal.h 00:04:39.251 TEST_HEADER include/spdk/opal_spec.h 00:04:39.251 TEST_HEADER include/spdk/pci_ids.h 00:04:39.251 TEST_HEADER include/spdk/pipe.h 00:04:39.251 TEST_HEADER include/spdk/queue.h 00:04:39.251 TEST_HEADER include/spdk/reduce.h 00:04:39.251 TEST_HEADER include/spdk/rpc.h 00:04:39.251 TEST_HEADER include/spdk/scheduler.h 00:04:39.251 TEST_HEADER include/spdk/scsi.h 00:04:39.251 CC test/env/mem_callbacks/mem_callbacks.o 00:04:39.251 TEST_HEADER include/spdk/scsi_spec.h 00:04:39.251 TEST_HEADER include/spdk/sock.h 00:04:39.251 TEST_HEADER include/spdk/stdinc.h 00:04:39.251 TEST_HEADER include/spdk/string.h 00:04:39.251 TEST_HEADER include/spdk/thread.h 00:04:39.251 TEST_HEADER include/spdk/trace.h 00:04:39.251 TEST_HEADER include/spdk/trace_parser.h 00:04:39.251 TEST_HEADER include/spdk/tree.h 00:04:39.251 TEST_HEADER include/spdk/ublk.h 00:04:39.251 TEST_HEADER include/spdk/util.h 00:04:39.251 TEST_HEADER include/spdk/uuid.h 00:04:39.251 TEST_HEADER include/spdk/version.h 00:04:39.251 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:39.251 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:39.251 TEST_HEADER include/spdk/vhost.h 00:04:39.251 TEST_HEADER include/spdk/vmd.h 00:04:39.251 TEST_HEADER include/spdk/xor.h 00:04:39.251 TEST_HEADER include/spdk/zipf.h 00:04:39.251 CXX test/cpp_headers/accel.o 00:04:39.251 LINK event_perf 00:04:39.251 LINK bdev_svc 00:04:39.510 LINK mkfs 00:04:39.510 LINK mem_callbacks 00:04:39.510 CXX test/cpp_headers/accel_module.o 00:04:39.510 LINK spdk_trace 00:04:39.510 CC test/event/reactor/reactor.o 00:04:39.510 LINK test_dma 00:04:39.510 LINK bdevio 00:04:39.510 CXX test/cpp_headers/assert.o 00:04:39.768 CC test/env/vtophys/vtophys.o 00:04:39.768 LINK dif 00:04:39.768 LINK reactor 00:04:39.768 LINK accel_perf 00:04:39.768 CC app/trace_record/trace_record.o 00:04:39.768 CC test/event/reactor_perf/reactor_perf.o 00:04:39.768 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:39.768 CXX test/cpp_headers/barrier.o 00:04:39.768 CXX test/cpp_headers/base64.o 00:04:39.768 CXX test/cpp_headers/bdev.o 00:04:39.768 LINK vtophys 00:04:39.768 CXX test/cpp_headers/bdev_module.o 00:04:39.768 LINK reactor_perf 00:04:40.027 LINK spdk_trace_record 00:04:40.027 CC test/event/app_repeat/app_repeat.o 00:04:40.027 CXX test/cpp_headers/bdev_zone.o 00:04:40.027 CC test/event/scheduler/scheduler.o 00:04:40.027 CC app/nvmf_tgt/nvmf_main.o 00:04:40.027 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:40.027 CC examples/bdev/hello_world/hello_bdev.o 00:04:40.027 CXX test/cpp_headers/bit_array.o 00:04:40.286 CC app/iscsi_tgt/iscsi_tgt.o 00:04:40.286 LINK nvme_fuzz 00:04:40.286 LINK 
app_repeat 00:04:40.286 CC app/spdk_tgt/spdk_tgt.o 00:04:40.286 LINK env_dpdk_post_init 00:04:40.286 LINK nvmf_tgt 00:04:40.286 LINK scheduler 00:04:40.286 CXX test/cpp_headers/bit_pool.o 00:04:40.286 CC app/spdk_lspci/spdk_lspci.o 00:04:40.286 LINK hello_bdev 00:04:40.286 LINK iscsi_tgt 00:04:40.286 LINK spdk_tgt 00:04:40.545 CC app/spdk_nvme_perf/perf.o 00:04:40.545 CXX test/cpp_headers/blob_bdev.o 00:04:40.545 LINK spdk_lspci 00:04:40.545 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:40.545 CXX test/cpp_headers/blobfs_bdev.o 00:04:40.545 CXX test/cpp_headers/blobfs.o 00:04:40.545 CC test/env/memory/memory_ut.o 00:04:40.545 CXX test/cpp_headers/blob.o 00:04:40.545 CXX test/cpp_headers/conf.o 00:04:40.545 CC examples/bdev/bdevperf/bdevperf.o 00:04:40.804 CC app/spdk_nvme_identify/identify.o 00:04:40.804 CC test/env/pci/pci_ut.o 00:04:40.804 CC app/spdk_nvme_discover/discovery_aer.o 00:04:40.804 CC app/spdk_top/spdk_top.o 00:04:40.804 CXX test/cpp_headers/config.o 00:04:40.804 CXX test/cpp_headers/cpuset.o 00:04:40.804 CC app/vhost/vhost.o 00:04:40.804 LINK spdk_nvme_discover 00:04:41.062 CXX test/cpp_headers/crc16.o 00:04:41.062 LINK vhost 00:04:41.062 CXX test/cpp_headers/crc32.o 00:04:41.062 LINK pci_ut 00:04:41.321 CC app/spdk_dd/spdk_dd.o 00:04:41.321 CXX test/cpp_headers/crc64.o 00:04:41.321 LINK memory_ut 00:04:41.321 LINK spdk_nvme_perf 00:04:41.321 CXX test/cpp_headers/dif.o 00:04:41.321 LINK bdevperf 00:04:41.321 CC app/fio/nvme/fio_plugin.o 00:04:41.579 CC app/fio/bdev/fio_plugin.o 00:04:41.579 CXX test/cpp_headers/dma.o 00:04:41.579 LINK spdk_nvme_identify 00:04:41.579 LINK spdk_dd 00:04:41.579 LINK spdk_top 00:04:41.579 CC test/lvol/esnap/esnap.o 00:04:41.579 CC test/nvme/aer/aer.o 00:04:41.837 CXX test/cpp_headers/endian.o 00:04:41.837 CC test/nvme/reset/reset.o 00:04:41.837 CC examples/blob/hello_world/hello_blob.o 00:04:41.837 CXX test/cpp_headers/env_dpdk.o 00:04:41.837 CC examples/blob/cli/blobcli.o 00:04:41.837 CC test/rpc_client/rpc_client_test.o 00:04:42.095 LINK spdk_bdev 00:04:42.095 LINK aer 00:04:42.095 CXX test/cpp_headers/env.o 00:04:42.095 LINK spdk_nvme 00:04:42.095 LINK reset 00:04:42.095 LINK hello_blob 00:04:42.095 LINK rpc_client_test 00:04:42.095 CXX test/cpp_headers/event.o 00:04:42.095 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:42.095 LINK iscsi_fuzz 00:04:42.353 CC test/app/histogram_perf/histogram_perf.o 00:04:42.353 CC test/thread/poller_perf/poller_perf.o 00:04:42.353 CC test/nvme/sgl/sgl.o 00:04:42.353 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:42.353 CC test/nvme/e2edp/nvme_dp.o 00:04:42.353 CC test/nvme/overhead/overhead.o 00:04:42.353 CXX test/cpp_headers/fd_group.o 00:04:42.353 LINK histogram_perf 00:04:42.353 LINK blobcli 00:04:42.353 CXX test/cpp_headers/fd.o 00:04:42.353 LINK poller_perf 00:04:42.610 LINK sgl 00:04:42.610 CC test/nvme/err_injection/err_injection.o 00:04:42.610 CXX test/cpp_headers/file.o 00:04:42.610 CC test/nvme/startup/startup.o 00:04:42.610 LINK nvme_dp 00:04:42.610 LINK overhead 00:04:42.610 CC test/nvme/reserve/reserve.o 00:04:42.610 LINK vhost_fuzz 00:04:42.610 CXX test/cpp_headers/ftl.o 00:04:42.868 CC examples/ioat/perf/perf.o 00:04:42.868 LINK err_injection 00:04:42.868 LINK startup 00:04:42.868 CC examples/ioat/verify/verify.o 00:04:42.868 CC test/nvme/simple_copy/simple_copy.o 00:04:42.868 CXX test/cpp_headers/gpt_spec.o 00:04:42.868 CC test/nvme/connect_stress/connect_stress.o 00:04:42.868 LINK reserve 00:04:42.868 LINK ioat_perf 00:04:42.868 CC test/app/jsoncat/jsoncat.o 00:04:43.126 CC 
test/nvme/boot_partition/boot_partition.o 00:04:43.126 CC test/nvme/compliance/nvme_compliance.o 00:04:43.126 LINK verify 00:04:43.126 CXX test/cpp_headers/hexlify.o 00:04:43.126 LINK connect_stress 00:04:43.126 LINK simple_copy 00:04:43.126 LINK jsoncat 00:04:43.126 LINK boot_partition 00:04:43.126 CC test/nvme/fused_ordering/fused_ordering.o 00:04:43.126 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:43.126 CXX test/cpp_headers/histogram_data.o 00:04:43.385 CC examples/nvme/hello_world/hello_world.o 00:04:43.385 CC examples/nvme/reconnect/reconnect.o 00:04:43.385 LINK nvme_compliance 00:04:43.385 CC test/nvme/fdp/fdp.o 00:04:43.385 LINK fused_ordering 00:04:43.385 CXX test/cpp_headers/idxd.o 00:04:43.385 LINK doorbell_aers 00:04:43.385 CC test/app/stub/stub.o 00:04:43.385 CC test/nvme/cuse/cuse.o 00:04:43.385 CXX test/cpp_headers/idxd_spec.o 00:04:43.643 CXX test/cpp_headers/init.o 00:04:43.643 LINK hello_world 00:04:43.643 LINK stub 00:04:43.643 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:43.643 LINK reconnect 00:04:43.643 CXX test/cpp_headers/ioat.o 00:04:43.643 LINK fdp 00:04:43.643 CC examples/sock/hello_world/hello_sock.o 00:04:43.643 CXX test/cpp_headers/ioat_spec.o 00:04:43.643 CC examples/nvme/arbitration/arbitration.o 00:04:43.901 CXX test/cpp_headers/iscsi_spec.o 00:04:43.901 CC examples/vmd/lsvmd/lsvmd.o 00:04:43.901 CC examples/vmd/led/led.o 00:04:43.901 CC examples/nvme/hotplug/hotplug.o 00:04:43.901 LINK hello_sock 00:04:43.901 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:43.901 CXX test/cpp_headers/json.o 00:04:44.160 LINK lsvmd 00:04:44.160 LINK arbitration 00:04:44.160 LINK led 00:04:44.160 LINK nvme_manage 00:04:44.160 LINK cmb_copy 00:04:44.160 CXX test/cpp_headers/jsonrpc.o 00:04:44.160 LINK hotplug 00:04:44.418 CC examples/nvmf/nvmf/nvmf.o 00:04:44.418 CXX test/cpp_headers/keyring.o 00:04:44.418 CXX test/cpp_headers/keyring_module.o 00:04:44.418 CC examples/nvme/abort/abort.o 00:04:44.418 CC examples/util/zipf/zipf.o 00:04:44.418 CXX test/cpp_headers/likely.o 00:04:44.418 CC examples/thread/thread/thread_ex.o 00:04:44.418 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:44.418 CXX test/cpp_headers/log.o 00:04:44.418 CXX test/cpp_headers/lvol.o 00:04:44.418 LINK zipf 00:04:44.677 LINK nvmf 00:04:44.677 LINK pmr_persistence 00:04:44.677 CC examples/idxd/perf/perf.o 00:04:44.677 CXX test/cpp_headers/memory.o 00:04:44.677 CXX test/cpp_headers/mmio.o 00:04:44.677 LINK thread 00:04:44.677 LINK abort 00:04:44.677 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:44.677 CXX test/cpp_headers/nbd.o 00:04:44.677 LINK cuse 00:04:44.677 CXX test/cpp_headers/notify.o 00:04:44.677 CXX test/cpp_headers/nvme.o 00:04:44.963 CXX test/cpp_headers/nvme_intel.o 00:04:44.963 CXX test/cpp_headers/nvme_ocssd.o 00:04:44.963 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:44.963 CXX test/cpp_headers/nvme_spec.o 00:04:44.963 LINK interrupt_tgt 00:04:44.963 LINK idxd_perf 00:04:44.963 CXX test/cpp_headers/nvme_zns.o 00:04:44.963 CXX test/cpp_headers/nvmf_cmd.o 00:04:44.963 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:44.963 CXX test/cpp_headers/nvmf.o 00:04:44.963 CXX test/cpp_headers/nvmf_spec.o 00:04:44.963 CXX test/cpp_headers/nvmf_transport.o 00:04:44.963 CXX test/cpp_headers/opal.o 00:04:45.222 CXX test/cpp_headers/opal_spec.o 00:04:45.222 CXX test/cpp_headers/pci_ids.o 00:04:45.222 CXX test/cpp_headers/pipe.o 00:04:45.222 CXX test/cpp_headers/queue.o 00:04:45.222 CXX test/cpp_headers/reduce.o 00:04:45.222 CXX test/cpp_headers/rpc.o 00:04:45.222 CXX test/cpp_headers/scheduler.o 
00:04:45.222 CXX test/cpp_headers/scsi.o 00:04:45.222 CXX test/cpp_headers/scsi_spec.o 00:04:45.222 CXX test/cpp_headers/sock.o 00:04:45.222 CXX test/cpp_headers/stdinc.o 00:04:45.222 CXX test/cpp_headers/string.o 00:04:45.222 CXX test/cpp_headers/thread.o 00:04:45.222 CXX test/cpp_headers/trace.o 00:04:45.222 CXX test/cpp_headers/trace_parser.o 00:04:45.222 CXX test/cpp_headers/tree.o 00:04:45.481 CXX test/cpp_headers/ublk.o 00:04:45.481 CXX test/cpp_headers/util.o 00:04:45.481 CXX test/cpp_headers/uuid.o 00:04:45.481 CXX test/cpp_headers/version.o 00:04:45.481 CXX test/cpp_headers/vfio_user_pci.o 00:04:45.481 CXX test/cpp_headers/vfio_user_spec.o 00:04:45.481 CXX test/cpp_headers/vhost.o 00:04:45.481 CXX test/cpp_headers/vmd.o 00:04:45.481 CXX test/cpp_headers/xor.o 00:04:45.481 CXX test/cpp_headers/zipf.o 00:04:47.383 LINK esnap 00:04:47.642 ************************************ 00:04:47.642 END TEST make 00:04:47.642 ************************************ 00:04:47.642 00:04:47.642 real 0m54.109s 00:04:47.642 user 4m25.109s 00:04:47.642 sys 1m16.328s 00:04:47.642 15:40:22 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:47.642 15:40:22 make -- common/autotest_common.sh@10 -- $ set +x 00:04:47.642 15:40:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:47.642 15:40:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:47.642 15:40:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:47.642 15:40:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:47.642 15:40:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:47.642 15:40:22 -- pm/common@44 -- $ pid=5928 00:04:47.642 15:40:22 -- pm/common@50 -- $ kill -TERM 5928 00:04:47.642 15:40:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:47.642 15:40:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:47.642 15:40:22 -- pm/common@44 -- $ pid=5930 00:04:47.642 15:40:22 -- pm/common@50 -- $ kill -TERM 5930 00:04:47.642 15:40:22 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:47.642 15:40:22 -- nvmf/common.sh@7 -- # uname -s 00:04:47.642 15:40:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.642 15:40:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.642 15:40:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.642 15:40:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.642 15:40:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.642 15:40:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.642 15:40:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.642 15:40:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.642 15:40:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.642 15:40:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.642 15:40:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f2426c-824d-4da7-acc2-ef92aa575225 00:04:47.642 15:40:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=88f2426c-824d-4da7-acc2-ef92aa575225 00:04:47.642 15:40:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.642 15:40:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.642 15:40:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.642 15:40:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.642 15:40:22 -- nvmf/common.sh@45 -- 
# source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:47.901 15:40:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.901 15:40:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.901 15:40:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.901 15:40:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.901 15:40:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.901 15:40:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.901 15:40:22 -- paths/export.sh@5 -- # export PATH 00:04:47.901 15:40:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.901 15:40:22 -- nvmf/common.sh@47 -- # : 0 00:04:47.901 15:40:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:47.901 15:40:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:47.901 15:40:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.901 15:40:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.901 15:40:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.901 15:40:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:47.901 15:40:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:47.901 15:40:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:47.901 15:40:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:47.901 15:40:22 -- spdk/autotest.sh@32 -- # uname -s 00:04:47.901 15:40:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:47.901 15:40:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:47.901 15:40:22 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:47.901 15:40:22 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:47.901 15:40:22 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:47.901 15:40:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:47.901 15:40:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:47.901 15:40:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:47.901 15:40:22 -- spdk/autotest.sh@48 -- # udevadm_pid=65581 00:04:47.901 15:40:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:47.901 15:40:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:47.901 15:40:22 -- pm/common@17 -- # local monitor 00:04:47.901 15:40:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:47.901 15:40:22 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:47.901 15:40:22 -- pm/common@25 -- # sleep 1 00:04:47.901 15:40:22 -- pm/common@21 -- # date +%s 00:04:47.901 15:40:22 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721490022 00:04:47.901 15:40:22 -- pm/common@21 -- # date +%s 00:04:47.901 15:40:22 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721490022 00:04:47.901 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721490022_collect-cpu-load.pm.log 00:04:47.901 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721490022_collect-vmstat.pm.log 00:04:48.835 15:40:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:48.835 15:40:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:48.835 15:40:23 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:48.835 15:40:23 -- common/autotest_common.sh@10 -- # set +x 00:04:48.835 15:40:23 -- spdk/autotest.sh@59 -- # create_test_list 00:04:48.835 15:40:23 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:48.835 15:40:23 -- common/autotest_common.sh@10 -- # set +x 00:04:48.835 15:40:23 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:48.835 15:40:23 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:48.835 15:40:23 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:48.835 15:40:23 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:48.835 15:40:23 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:48.835 15:40:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:49.094 15:40:23 -- common/autotest_common.sh@1451 -- # uname 00:04:49.094 15:40:23 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:49.094 15:40:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:49.094 15:40:23 -- common/autotest_common.sh@1471 -- # uname 00:04:49.094 15:40:23 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:49.094 15:40:23 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:49.094 15:40:23 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:49.094 15:40:23 -- spdk/autotest.sh@72 -- # hash lcov 00:04:49.094 15:40:23 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:49.094 15:40:23 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:49.094 --rc lcov_branch_coverage=1 00:04:49.094 --rc lcov_function_coverage=1 00:04:49.094 --rc genhtml_branch_coverage=1 00:04:49.094 --rc genhtml_function_coverage=1 00:04:49.094 --rc genhtml_legend=1 00:04:49.094 --rc geninfo_all_blocks=1 00:04:49.094 ' 00:04:49.094 15:40:23 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:49.094 --rc lcov_branch_coverage=1 00:04:49.094 --rc lcov_function_coverage=1 00:04:49.094 --rc genhtml_branch_coverage=1 00:04:49.094 --rc genhtml_function_coverage=1 00:04:49.094 --rc genhtml_legend=1 00:04:49.094 --rc geninfo_all_blocks=1 00:04:49.094 ' 00:04:49.094 15:40:23 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:49.094 --rc lcov_branch_coverage=1 00:04:49.094 --rc lcov_function_coverage=1 00:04:49.094 --rc genhtml_branch_coverage=1 00:04:49.094 --rc genhtml_function_coverage=1 00:04:49.094 --rc genhtml_legend=1 00:04:49.094 --rc geninfo_all_blocks=1 00:04:49.094 --no-external' 00:04:49.094 15:40:23 
-- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:49.094 --rc lcov_branch_coverage=1 00:04:49.094 --rc lcov_function_coverage=1 00:04:49.094 --rc genhtml_branch_coverage=1 00:04:49.094 --rc genhtml_function_coverage=1 00:04:49.094 --rc genhtml_legend=1 00:04:49.094 --rc geninfo_all_blocks=1 00:04:49.094 --no-external' 00:04:49.094 15:40:23 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:49.094 lcov: LCOV version 1.14 00:04:49.094 15:40:23 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:03.992 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:03.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:13.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:13.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:13.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:13.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:13.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:13.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:13.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:13.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:13.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:13.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:13.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:13.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:13.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:13.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:13.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:13.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:13.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:13.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:13.964 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:13.964 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:14.224 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:14.224 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:14.224 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:14.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:14.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:14.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:14.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:14.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:14.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:14.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:14.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:14.225 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:14.225 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:14.485 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:14.485 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:14.485 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:14.800 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:14.800 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:14.800 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:14.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:15.062 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:15.062 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:15.062 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:15.062 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:15.062 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:15.062 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:15.062 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:15.062 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:15.062 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:15.062 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:15.062 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:15.062 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:15.062 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:15.062 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:15.062 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:15.062 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:18.355 15:40:52 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:18.355 15:40:52 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:18.355 15:40:52 -- common/autotest_common.sh@10 -- # set +x 00:05:18.355 15:40:52 -- spdk/autotest.sh@91 -- # rm -f 00:05:18.355 15:40:52 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.613 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.178 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:19.178 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:19.436 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:19.436 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:19.436 15:40:54 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:19.436 15:40:54 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:19.436 15:40:54 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:19.436 15:40:54 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:19.436 15:40:54 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:19.436 15:40:54 -- common/autotest_common.sh@1669 -- # 
is_block_zoned nvme0n1 00:05:19.436 15:40:54 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:19.436 15:40:54 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:19.436 15:40:54 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:19.436 15:40:54 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:19.436 15:40:54 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:19.436 15:40:54 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:19.436 15:40:54 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:19.436 15:40:54 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:19.436 15:40:54 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:19.436 15:40:54 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:19.436 15:40:54 -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:19.436 15:40:54 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:19.436 15:40:54 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:19.436 15:40:54 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:19.436 15:40:54 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:05:19.436 15:40:54 -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:05:19.436 15:40:54 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:19.436 15:40:54 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:19.436 15:40:54 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:19.436 15:40:54 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:05:19.436 15:40:54 -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:05:19.436 15:40:54 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:19.437 15:40:54 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:19.437 15:40:54 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:19.437 15:40:54 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:05:19.437 15:40:54 -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:05:19.437 15:40:54 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:19.437 15:40:54 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:19.437 15:40:54 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:19.437 15:40:54 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:19.437 15:40:54 -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:19.437 15:40:54 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:19.437 15:40:54 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:19.437 15:40:54 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:19.437 15:40:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:19.437 15:40:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:19.437 15:40:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:19.437 15:40:54 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:19.437 15:40:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:19.437 No valid GPT data, bailing 00:05:19.437 15:40:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:19.437 15:40:54 -- 
scripts/common.sh@391 -- # pt= 00:05:19.437 15:40:54 -- scripts/common.sh@392 -- # return 1 00:05:19.437 15:40:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:19.437 1+0 records in 00:05:19.437 1+0 records out 00:05:19.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201971 s, 51.9 MB/s 00:05:19.437 15:40:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:19.437 15:40:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:19.437 15:40:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:19.437 15:40:54 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:19.437 15:40:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:19.437 No valid GPT data, bailing 00:05:19.437 15:40:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:19.695 15:40:54 -- scripts/common.sh@391 -- # pt= 00:05:19.695 15:40:54 -- scripts/common.sh@392 -- # return 1 00:05:19.695 15:40:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:19.695 1+0 records in 00:05:19.695 1+0 records out 00:05:19.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619471 s, 169 MB/s 00:05:19.695 15:40:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:19.695 15:40:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:19.695 15:40:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:05:19.695 15:40:54 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:05:19.695 15:40:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:19.695 No valid GPT data, bailing 00:05:19.695 15:40:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:19.695 15:40:54 -- scripts/common.sh@391 -- # pt= 00:05:19.695 15:40:54 -- scripts/common.sh@392 -- # return 1 00:05:19.695 15:40:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:19.695 1+0 records in 00:05:19.695 1+0 records out 00:05:19.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00582076 s, 180 MB/s 00:05:19.695 15:40:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:19.695 15:40:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:19.695 15:40:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:05:19.695 15:40:54 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:05:19.695 15:40:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:19.695 No valid GPT data, bailing 00:05:19.695 15:40:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:19.695 15:40:54 -- scripts/common.sh@391 -- # pt= 00:05:19.695 15:40:54 -- scripts/common.sh@392 -- # return 1 00:05:19.695 15:40:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:19.695 1+0 records in 00:05:19.695 1+0 records out 00:05:19.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00588179 s, 178 MB/s 00:05:19.695 15:40:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:19.695 15:40:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:19.695 15:40:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:05:19.695 15:40:54 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:05:19.695 15:40:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:19.695 No valid GPT data, bailing 00:05:19.695 15:40:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:19.695 
15:40:54 -- scripts/common.sh@391 -- # pt= 00:05:19.695 15:40:54 -- scripts/common.sh@392 -- # return 1 00:05:19.695 15:40:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:19.695 1+0 records in 00:05:19.695 1+0 records out 00:05:19.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00580392 s, 181 MB/s 00:05:19.695 15:40:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:19.695 15:40:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:19.695 15:40:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:05:19.695 15:40:54 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:05:19.695 15:40:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:19.953 No valid GPT data, bailing 00:05:19.953 15:40:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:19.953 15:40:54 -- scripts/common.sh@391 -- # pt= 00:05:19.953 15:40:54 -- scripts/common.sh@392 -- # return 1 00:05:19.953 15:40:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:19.953 1+0 records in 00:05:19.953 1+0 records out 00:05:19.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00586457 s, 179 MB/s 00:05:19.953 15:40:54 -- spdk/autotest.sh@118 -- # sync 00:05:19.953 15:40:54 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:19.953 15:40:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:19.953 15:40:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:23.241 15:40:57 -- spdk/autotest.sh@124 -- # uname -s 00:05:23.241 15:40:57 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:23.241 15:40:57 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:23.241 15:40:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.241 15:40:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.241 15:40:57 -- common/autotest_common.sh@10 -- # set +x 00:05:23.241 ************************************ 00:05:23.241 START TEST setup.sh 00:05:23.241 ************************************ 00:05:23.241 15:40:57 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:23.241 * Looking for test storage... 00:05:23.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:23.241 15:40:57 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:23.241 15:40:57 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:23.241 15:40:57 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:23.241 15:40:57 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.241 15:40:57 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.241 15:40:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:23.241 ************************************ 00:05:23.241 START TEST acl 00:05:23.241 ************************************ 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:23.241 * Looking for test storage... 
00:05:23.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:23.241 15:40:57 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 
00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:23.241 15:40:57 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:23.241 15:40:57 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:23.241 15:40:57 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:23.241 15:40:57 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:23.241 15:40:57 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:23.241 15:40:57 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:23.241 15:40:57 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:23.241 15:40:57 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.620 15:40:59 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:24.620 15:40:59 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:24.620 15:40:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:24.620 15:40:59 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:24.620 15:40:59 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.620 15:40:59 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:25.186 15:40:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:25.186 15:40:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:25.186 15:40:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:25.754 Hugepages 00:05:25.754 node hugesize free / total 00:05:25.754 15:41:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:25.754 15:41:00 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:25.754 15:41:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:25.754 00:05:25.754 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:25.754 15:41:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:25.754 15:41:00 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:25.754 15:41:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:26.013 15:41:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:26.013 15:41:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:26.013 15:41:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:26.013 15:41:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:26.013 15:41:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:26.013 15:41:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:26.013 15:41:00 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:26.013 15:41:00 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:26.013 15:41:00 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:26.013 15:41:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:26.272 15:41:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:26.272 15:41:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:26.272 15:41:00 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:26.272 15:41:00 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:26.272 15:41:00 setup.sh.acl -- 
setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:26.272 15:41:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:26.272 15:41:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:05:26.272 15:41:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:26.272 15:41:00 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:26.272 15:41:00 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:26.272 15:41:00 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:26.272 15:41:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:26.272 15:41:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:05:26.272 15:41:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:26.272 15:41:01 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:26.272 15:41:01 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:26.272 15:41:01 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:26.272 15:41:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:26.272 15:41:01 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:05:26.272 15:41:01 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:26.272 15:41:01 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.272 15:41:01 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.272 15:41:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:26.272 ************************************ 00:05:26.272 START TEST denied 00:05:26.272 ************************************ 00:05:26.272 15:41:01 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:05:26.272 15:41:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:26.272 15:41:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:26.272 15:41:01 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:26.272 15:41:01 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.272 15:41:01 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.175 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:28.175 15:41:02 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:28.175 15:41:02 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:28.175 15:41:02 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:28.175 15:41:02 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:28.175 15:41:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:28.175 15:41:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:28.175 15:41:02 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:28.175 15:41:02 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:28.175 15:41:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:28.175 15:41:02 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:34.756 00:05:34.756 real 0m7.920s 00:05:34.756 user 0m0.983s 00:05:34.756 sys 0m2.021s 00:05:34.756 15:41:08 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.756 15:41:08 setup.sh.acl.denied -- 
common/autotest_common.sh@10 -- # set +x 00:05:34.756 ************************************ 00:05:34.756 END TEST denied 00:05:34.756 ************************************ 00:05:34.756 15:41:09 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:34.756 15:41:09 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.756 15:41:09 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.756 15:41:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:34.756 ************************************ 00:05:34.756 START TEST allowed 00:05:34.756 ************************************ 00:05:34.756 15:41:09 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:05:34.756 15:41:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:34.756 15:41:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:34.756 15:41:09 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.756 15:41:09 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:34.756 15:41:09 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:35.695 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:35.695 15:41:10 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.606 00:05:37.606 real 0m2.845s 00:05:37.606 user 0m1.164s 00:05:37.606 sys 0m1.697s 00:05:37.606 15:41:11 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.606 ************************************ 00:05:37.606 END 
TEST allowed 00:05:37.606 15:41:11 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:37.606 ************************************ 00:05:37.606 00:05:37.606 real 0m14.290s 00:05:37.606 user 0m3.539s 00:05:37.606 sys 0m5.857s 00:05:37.606 15:41:11 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.606 15:41:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:37.606 ************************************ 00:05:37.606 END TEST acl 00:05:37.606 ************************************ 00:05:37.606 15:41:12 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:37.606 15:41:12 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.606 15:41:12 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.606 15:41:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:37.606 ************************************ 00:05:37.606 START TEST hugepages 00:05:37.606 ************************************ 00:05:37.606 15:41:12 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:37.606 * Looking for test storage... 00:05:37.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 4680744 kB' 'MemAvailable: 7369108 kB' 'Buffers: 2436 kB' 'Cached: 2892600 kB' 'SwapCached: 0 kB' 'Active: 450456 kB' 'Inactive: 2552496 kB' 'Active(anon): 118432 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552496 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 109768 kB' 'Mapped: 48696 kB' 'Shmem: 10516 kB' 'KReclaimable: 82016 kB' 'Slab: 163824 kB' 'SReclaimable: 82016 kB' 'SUnreclaim: 81808 kB' 'KernelStack: 6556 kB' 'PageTables: 4052 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 324204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.606 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.606 15:41:12 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:37.607 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:37.608 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:37.608 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:37.608 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:37.608 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:37.608 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:37.608 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:37.608 15:41:12 setup.sh.hugepages -- 
setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:37.608 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:37.608 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:37.608 15:41:12 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.608 15:41:12 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.608 15:41:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:37.608 ************************************ 00:05:37.608 START TEST default_setup 00:05:37.608 ************************************ 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.608 15:41:12 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:38.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.116 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.116 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.116 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.116 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # 
verify_nr_hugepages 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6819528 kB' 'MemAvailable: 9507656 kB' 'Buffers: 2436 kB' 'Cached: 2892592 kB' 'SwapCached: 0 kB' 'Active: 461924 kB' 'Inactive: 2552520 kB' 'Active(anon): 129900 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 120984 kB' 'Mapped: 48840 kB' 'Shmem: 10476 kB' 'KReclaimable: 81492 kB' 'Slab: 162992 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81500 kB' 'KernelStack: 6528 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB' 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.116 
15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.116 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.117 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.118 15:41:13 
00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:39.118 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6819276 kB' 'MemAvailable: 9507404 kB' 'Buffers: 2436 kB' 'Cached: 2892592 kB' 'SwapCached: 0 kB' 'Active: 461324 kB' 'Inactive: 2552520 kB' 'Active(anon): 129300 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 120444 kB' 'Mapped: 48708 kB' 'Shmem: 10476 kB' 'KReclaimable: 81492 kB' 'Slab: 162992 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81500 kB' 'KernelStack: 6560 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
[... per-key scan: every field from MemTotal through HugePages_Rsvd is tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with continue ...]
00:05:39.119 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:39.119 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:39.119 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:39.119 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
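The printf line above is the real payload of this stretch of trace: get_meminfo snapshots /proc/meminfo once per query. Its hugepage figures are self-consistent; a quick check with the values copied out (the variable names here are illustrative only, not from the script):

    # Figures from the 'printf' snapshot above:
    hugepages_total=1024 # HugePages_Total
    hugepages_free=1024  # HugePages_Free -- the pool is fully provisioned and idle
    hugepagesize_kb=2048 # Hugepagesize, i.e. 2 MiB pages

    echo "$((hugepages_total * hugepagesize_kb)) kB"                # 2097152 kB, matching 'Hugetlb: 2097152 kB'
    echo "$((hugepages_total * hugepagesize_kb / 1024 / 1024)) GiB" # 2 GiB backing the test run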
00:05:39.119 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:39.120 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:39.120 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:39.120 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:39.120 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:39.120 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.120 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.120 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.120 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.120 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.120 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6819968 kB' 'MemAvailable: 9508096 kB' 'Buffers: 2436 kB' 'Cached: 2892592 kB' 'SwapCached: 0 kB' 'Active: 461392 kB' 'Inactive: 2552520 kB' 'Active(anon): 129368 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 120464 kB' 'Mapped: 48708 kB' 'Shmem: 10476 kB' 'KReclaimable: 81492 kB' 'Slab: 162992 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81500 kB' 'KernelStack: 6560 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
[... per-key scan: every field from MemTotal through HugePages_Free is tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped with continue ...]
00:05:39.121 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:39.121 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:39.121 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:39.121 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:39.121 nr_hugepages=1024
00:05:39.121 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:39.121 resv_hugepages=0
00:05:39.121 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:39.121 surplus_hugepages=0
00:05:39.121 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:39.121 anon_hugepages=0
00:05:39.121 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:39.121 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:39.121 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
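With anon, surp and resv collected, the script prints the summary above and then sanity-checks the pool. A condensed sketch of what hugepages.sh@107 and @109 are asserting (names as echoed by the trace):

    # Values just collected via get_meminfo:
    anon=0            # AnonHugePages, kB of transparent hugepages in use
    surp=0            # HugePages_Surp
    resv=0            # HugePages_Rsvd
    nr_hugepages=1024

    # xtrace shows the left-hand sides already expanded to 1024, so the variable
    # they came from is not visible in this log (presumably the configured pool
    # size). An arithmetic (( )) that evaluates to 0 returns non-zero, which is
    # what lets these two lines act as assertions:
    (( 1024 == nr_hugepages + surp + resv )) # every configured page is accounted for
    (( 1024 == nr_hugepages ))               # and none of them are surplus or reserved

Both hold for this run, so the trace proceeds to confirm HugePages_Total itself, below.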
00:05:39.121 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:39.121 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:39.122 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:39.122 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:39.122 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:39.122 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.122 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.122 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.122 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.122 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.122 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6819968 kB' 'MemAvailable: 9508096 kB' 'Buffers: 2436 kB' 'Cached: 2892592 kB' 'SwapCached: 0 kB' 'Active: 461524 kB' 'Inactive: 2552520 kB' 'Active(anon): 129500 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 120596 kB' 'Mapped: 48708 kB' 'Shmem: 10476 kB' 'KReclaimable: 81492 kB' 'Slab: 162992 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81500 kB' 'KernelStack: 6544 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
[... per-key scan: every field from MemTotal through WritebackTmp is tested against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped with continue ...]
00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:39.382 15:41:13 
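The cycle traced above, set IFS, read one field, test it against the escaped pattern, continue on a mismatch, is setup/common.sh's get_meminfo helper scanning a meminfo file line by line until the requested field matches, then echoing its value. A minimal sketch of that idiom, reconstructed from the xtrace alone (the real helper may differ in detail):

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup idiom visible in the trace; reconstructed
    # from the xtrace, not copied from setup/common.sh, so details may differ.
    shopt -s extglob   # required for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo
        # When a node id is given and that node has its own meminfo, use it;
        # with $node empty the path does not exist and /proc/meminfo stays.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node <id> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Against the state logged above, such a lookup for HugePages_Total prints 1024, which is what hugepages.sh@110 then checks against nr_hugepages + surp + resv.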
00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:39.382 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6819968 kB' 'MemUsed: 5422004 kB' 'SwapCached: 0 kB' 'Active: 461480 kB' 'Inactive: 2552520 kB' 'Active(anon): 129456 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 2895028 kB' 'Mapped: 48708 kB' 'AnonPages: 120548 kB' 'Shmem: 10476 kB' 'KernelStack: 6544 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81492 kB' 'Slab: 162992 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
... (the scan repeats for each node0 meminfo field, MemFree through HugePages_Free, with no match for HugePages_Surp) ...
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
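Taken together, the hugepages.sh@112-117 trace amounts to: enumerate the NUMA nodes, record each node's hugepage count, then fold reserved pages and each node's surplus into the expected per-node totals. A rough reconstruction under those assumptions (names follow the xtrace; the 1024 recorded by get_nodes is shown here as a per-node HugePages_Total lookup, which is a guess, and resv is assumed to be computed earlier):

    # Sketch of the per-node bookkeeping suggested by the trace above;
    # the real setup/hugepages.sh may differ.
    shopt -s extglob
    declare -A nodes_sys nodes_test

    get_nodes() {
        local node no_nodes=0
        for node in /sys/devices/system/node/node+([0-9]); do
            # Node id = path with everything up to "node" stripped: node0 -> 0.
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
            (( ++no_nodes ))
        done
        (( no_nodes > 0 ))   # fail when no NUMA nodes are visible
    }

    # Mirror of hugepages.sh@115-117: add reserved pages and each node's
    # surplus into the per-node expectation.
    adjust_expected_counts() {
        local node
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))
            (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        done
    }

On this run both adjustments are zero, so node0's expectation stays at 1024, matching the comparison that follows.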
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:39.383 node0=1024 expecting 1024
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:39.383
00:05:39.383 real 0m1.716s
00:05:39.383 user 0m0.679s
00:05:39.383 sys 0m1.043s
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:39.383 15:41:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:39.383 ************************************
00:05:39.383 END TEST default_setup
00:05:39.383 ************************************
00:05:39.383 15:41:13 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:39.383 15:41:13 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:39.383 15:41:13 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:39.383 15:41:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:39.383 ************************************
00:05:39.383 START TEST per_node_1G_alloc
00:05:39.383 ************************************
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
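The nr_hugepages=512 that get_test_nr_hugepages settles on is simply the requested size divided by the default hugepage size: with the 2048 kB Hugepagesize visible in the meminfo dumps below, 1048576 kB / 2048 kB = 512 pages. A hypothetical one-liner making the conversion explicit:

    # Hypothetical illustration of the size -> page-count conversion:
    size_kb=1048576            # first argument to get_test_nr_hugepages (1 GiB)
    default_hugepages_kb=2048  # Hugepagesize reported by /proc/meminfo
    echo $(( size_kb / default_hugepages_kb ))   # prints 512 -> nr_hugepages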
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:39.383 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:39.384 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:39.384 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:39.384 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:39.950 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:40.215 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:40.215 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:40.215 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:40.215 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:40.215 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:40.215 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:40.215 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:40.215 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:40.215 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:40.215 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:40.215 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:40.215 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:40.215 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
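The test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] traced in verify_nr_hugepages above is a transparent-hugepage mode check: the bracketed word in the sysfs file marks the active setting, here madvise. A small stand-alone version of the same check (thp_mode is a hypothetical name; the sysfs path is the standard kernel one):

    # Stand-alone version of the THP-mode test seen in the trace above.
    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp_mode != *"[never]"* ]]; then
        echo "THP active: $thp_mode, so AnonHugePages is worth reading"
    fi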
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7863948 kB' 'MemAvailable: 10552080 kB' 'Buffers: 2436 kB' 'Cached: 2892588 kB' 'SwapCached: 0 kB' 'Active: 462060 kB' 'Inactive: 2552524 kB' 'Active(anon): 130036 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120996 kB' 'Mapped: 48812 kB' 'Shmem: 10476 kB' 'KReclaimable: 81492 kB' 'Slab: 163036 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81544 kB' 'KernelStack: 6628 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 339900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:40.216 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
... (the scan repeats for every field of the dump above, MemFree through HardwareCorrupted, none of which matches AnonHugePages) ...
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
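At this point verify_nr_hugepages has anon=0 (no anonymous transparent hugepages counted against the pool) and moves on to the system-wide surplus. Condensed, the two lookups around this point look like the following sketch (the surrounding checks in hugepages.sh are not visible in the trace and are omitted):

    # Condensed view of the two lookups around this point in the trace.
    anon=$(get_meminfo AnonHugePages)   # -> 0 kB here: no anonymous THP in use
    surp=$(get_meminfo HugePages_Surp)  # no node argument: falls back to /proc/meminfo
    echo "anon=$anon surp=$surp"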
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7863948 kB' 'MemAvailable: 10552080 kB' 'Buffers: 2436 kB' 'Cached: 2892588 kB' 'SwapCached: 0 kB' 'Active: 461832 kB' 'Inactive: 2552524 kB' 'Active(anon): 129808 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121060 kB' 'Mapped: 48708 kB' 'Shmem: 10476 kB' 'KReclaimable: 81492 kB' 'Slab: 163024 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81532 kB' 'KernelStack: 6612 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 339900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.217 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
... (the scan advances field by field, MemFree through KReclaimable so far, with no match for HugePages_Surp) ...
00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.218 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.219 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7864084 kB' 'MemAvailable: 10552220 kB' 'Buffers: 2436 kB' 'Cached: 2892592 kB' 'SwapCached: 0 kB' 'Active: 461572 kB' 'Inactive: 2552528 kB' 'Active(anon): 129548 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120688 kB' 'Mapped: 48708 kB' 'Shmem: 10476 kB' 'KReclaimable: 81492 kB' 'Slab: 162984 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81492 kB' 'KernelStack: 6544 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 339900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55236 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 
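Every lookup in this stretch of the log is the same helper, get_meminfo from setup/common.sh, traced once per /proc/meminfo field. A minimal sketch of what that traced loop does, reconstructed from the set -x lines above rather than copied from the SPDK sources (the per-node branch and the return-1 fallback are assumptions):

    #!/usr/bin/env bash
    shopt -s extglob                      # the +([0-9]) pattern below needs extglob

    get_meminfo() {
        local get=$1 node=${2:-}          # key to look up, optional NUMA node
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Assumed branch: with a node argument, read that node's meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # this compare/continue pair is what set -x prints per field
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp            # on the VM above this prints 0

Each "[[ X == \H\u\g\e... ]]" / "continue" pair in the trace is one iteration of that loop, which is why a single lookup produces some fifty trace lines.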
[set -x trace condensed: per-field scan from MemTotal through HugePages_Free, each compare followed by continue, until HugePages_Rsvd matches]
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:40.221 nr_hugepages=512
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:40.221 resv_hugepages=0
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:40.221 surplus_hugepages=0
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:40.221 anon_hugepages=0
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:40.221 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7864084 kB' 'MemAvailable: 10552220 kB' 'Buffers: 2436 kB' 'Cached: 2892592 kB' 'SwapCached: 0 kB' 'Active: 461780 kB' 'Inactive: 2552528 kB' 'Active(anon): 129756 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120900 kB' 'Mapped: 48708 kB' 'Shmem: 10476 kB' 'KReclaimable: 81492 kB' 'Slab: 162980 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81488 kB' 'KernelStack: 6544 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 339900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55236 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
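At hugepages.sh@107-@109 the script cross-checks the counters it has just collected; the HugePages_Total lookup that follows re-reads meminfo to confirm the configured total. In standalone form, the arithmetic being asserted (variable names as in the trace; the echo fallbacks are illustrative only, not SPDK code):

    # Values recovered by the get_meminfo calls above
    nr_hugepages=512   # pages this test configured
    surp=0             # HugePages_Surp
    resv=0             # HugePages_Rsvd
    anon=0             # AnonHugePages

    # The requested total must equal the configured pages plus any surplus
    # and reserved pages, i.e. the kernel neither lost nor over-allocated:
    (( 512 == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'
    (( 512 == nr_hugepages ))               || echo 'allocation short of request'

Both checks pass here (512 == 512 + 0 + 0), so the test goes on to read HugePages_Total from a fresh meminfo snapshot.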
[set -x trace condensed: per-field scan from MemTotal onward, each compare followed by continue; this excerpt breaks off at the Unaccepted compare, before HugePages_Total matches]
00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7864708 kB' 'MemUsed: 4377264 kB' 'SwapCached: 0 kB' 'Active: 461536 kB' 'Inactive: 2552528 kB' 'Active(anon): 129512 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 2895028 kB' 'Mapped: 48708 kB' 'AnonPages: 120656 kB' 'Shmem: 10476 kB' 'KernelStack: 6544 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81492 kB' 'Slab: 162976 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.223 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:40.224 node0=512 expecting 512 00:05:40.224 ************************************ 00:05:40.224 END TEST per_node_1G_alloc 00:05:40.224 ************************************ 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:40.224 00:05:40.224 real 0m0.992s 00:05:40.224 user 0m0.418s 00:05:40.224 sys 0m0.631s 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.224 15:41:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:40.484 15:41:15 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:40.484 15:41:15 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.484 15:41:15 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.484 15:41:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:40.484 
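The xtrace above is stepping through setup/common.sh's get_meminfo: it picks /proc/meminfo or a node's own meminfo file, strips any "Node N " prefix, then scans key/value pairs until the requested field matches. A minimal standalone sketch of that lookup pattern, assuming bash with extglob; the function name is hypothetical and this is a re-creation for illustration, not the verbatim SPDK source:

shopt -s extglob

# get_meminfo_sketch KEY [NODE] -- prints the value column of the matching
# meminfo line, mirroring the read/match/continue cycle seen in the trace.
get_meminfo_sketch() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    # Per-node queries read that node's own meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Node files prefix every line with "Node N "; strip it, as the trace
    # does with mem=("${mem[@]#Node +([0-9]) }").
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        # Same shape as the [[ $var == KEY ]] / continue cycle in the log.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_sketch HugePages_Total     # -> 512 in the run above
get_meminfo_sketch HugePages_Surp 0    # -> 0 for NUMA node 0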
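The hugepages.sh trace around it reduces to one invariant: HugePages_Total read back from meminfo must equal the requested page count plus surplus and reserved pages, and each NUMA node must hold its expected share (hence "node0=512 expecting 512"). A hedged sketch of that check, with hypothetical constants mirroring this run:

# Values mirror this run: 512 pages, one NUMA node, zero surplus/reserved.
nr_hugepages=512   # pages requested for the test
surp=0             # HugePages_Surp read back from meminfo
resv=0             # HugePages_Rsvd read back from meminfo
total=512          # HugePages_Total read back from meminfo

# System-wide invariant, as at setup/hugepages.sh@110 above.
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

# Per-node pass: fold surplus/reserved into each node's expected count and
# compare with what node<N>/meminfo reports, as hugepages.sh@115-117 does.
nodes_test=([0]=512)
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv + surp ))
    echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
done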
00:05:40.484 ************************************
00:05:40.484 START TEST even_2G_alloc
00:05:40.484 ************************************
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:40.484 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:40.485 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:40.485 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:40.485 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:40.485 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:40.485 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:40.485 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:40.485 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:41.054 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:41.318 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:41.318 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:41.318 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:41.318 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:41.318 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:41.318 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:41.318 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:41.318 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:41.318 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6810264 kB' 'MemAvailable: 9498400 kB' 'Buffers: 2436 kB' 'Cached: 2892592 kB' 'SwapCached: 0 kB' 'Active: 461680 kB' 'Inactive: 2552528 kB' 'Active(anon): 129656 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120788 kB' 'Mapped: 48832 kB' 'Shmem: 10476 kB' 'KReclaimable: 81492 kB' 'Slab: 162992 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81500 kB' 'KernelStack: 6564 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
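The sizing trace above shows get_test_nr_hugepages turning the 2097152 kB (2 GiB) request into nr_hugepages=1024 by dividing by the 2048 kB default hugepage, then handing the whole allotment to the single node. A sketch of that math, assuming kB units throughout (consistent with this run's numbers; the structure is illustrative, not the verbatim script):

# Sizing sketch, assuming kB units; names are illustrative.
size_kb=2097152           # requested allocation: 2 GiB expressed in kB
default_hugepage_kb=2048  # Hugepagesize from /proc/meminfo

(( size_kb >= default_hugepage_kb )) || exit 1
nr_hugepages=$((size_kb / default_hugepage_kb))
echo "nr_hugepages=$nr_hugepages"   # -> 1024, matching the trace

# HUGE_EVEN_ALLOC=yes asks setup.sh to spread pages evenly per node; with
# _no_nodes=1 the single node 0 receives the whole allotment.
_no_nodes=1
nodes_test=()
for ((node = 0; node < _no_nodes; node++)); do
    nodes_test[node]=$((nr_hugepages / _no_nodes))
done
echo "node0 share: ${nodes_test[0]}"   # -> 1024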
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 
15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.319 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6810516 kB' 'MemAvailable: 9498652 kB' 'Buffers: 2436 kB' 'Cached: 2892592 kB' 'SwapCached: 0 kB' 'Active: 461536 kB' 'Inactive: 2552528 kB' 'Active(anon): 129512 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552528 kB' 
'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 120640 kB' 'Mapped: 48708 kB' 'Shmem: 10476 kB' 'KReclaimable: 81492 kB' 'Slab: 162996 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81504 kB' 'KernelStack: 6560 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.320 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.320 15:41:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: fields Active through HugePages_Total are each read and compared against HugePages_Surp; none matches and every one is skipped with continue] 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6810516 kB' 'MemAvailable: 9498652 kB' 'Buffers: 2436 kB' 'Cached: 2892592 kB' 'SwapCached: 0 kB' 'Active: 461440 kB' 'Inactive: 2552528 kB' 'Active(anon): 129416 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 120520 kB' 'Mapped: 48708 kB' 'Shmem: 10476 kB' 'KReclaimable: 81492 kB' 'Slab: 162996 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81504 kB' 'KernelStack: 6544 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55268 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB' 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.322 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.322 15:41:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: fields Active(anon) through HugePages_Free are each read and compared against HugePages_Rsvd; none matches and every one is skipped with continue] 00:05:41.324 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
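The field-by-field scans condensed in this part of the log are bash xtrace output from the get_meminfo helper in setup/common.sh: each IFS=': ' / read -r var val _ pair is one loop iteration, each [[ Field == ... ]] / continue pair is a non-matching field, and the echo / return records after a match report the field's value. A minimal reconstruction of that helper, inferred from the traced statements; the function wrapper, the elif branch behavior, and the final return 1 are assumptions, while the commands and variable names are the trace's own:

shopt -s extglob                     # the +([0-9]) pattern below needs extglob

get_meminfo() {                      # reconstruction from the trace, not verbatim source
    local get=$1                     # field to report, e.g. HugePages_Surp (common.sh@17)
    local node=$2                    # optional NUMA node; empty means system-wide (@18)
    local var val                    # common.sh@19
    local mem_f mem                  # common.sh@20
    mem_f=/proc/meminfo              # common.sh@22
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then   # common.sh@23
        mem_f=/sys/devices/system/node/node$node/meminfo           # common.sh@24
    elif [[ -n $node ]]; then        # common.sh@25; guard behavior assumed
        return 1                     # assumed: node requested but no per-node file
    fi
    mapfile -t mem < "$mem_f"        # common.sh@28
    mem=("${mem[@]#Node +([0-9]) }") # common.sh@29: strip "Node N " from per-node files
    while IFS=': ' read -r var val _; do      # common.sh@31
        [[ $var == "$get" ]] || continue      # common.sh@32
        echo "$val"                  # common.sh@33: the bare 0 or 1024 in the log
        return 0
    done < <(printf '%s\n' "${mem[@]}")       # this printf is the long dump records
    return 1
}

The echo 0 / return 0 records that follow are that final branch reporting HugePages_Rsvd for this run.
00:05:41.324 15:41:15 setup.sh.hugepages.even_2G_alloc -- 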
setup/common.sh@33 -- # echo 0 00:05:41.324 15:41:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:41.324 nr_hugepages=1024 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:41.324 resv_hugepages=0 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:41.324 surplus_hugepages=0 00:05:41.324 anon_hugepages=0 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6810516 kB' 'MemAvailable: 9498652 kB' 'Buffers: 2436 kB' 'Cached: 2892592 kB' 'SwapCached: 0 kB' 'Active: 461664 kB' 'Inactive: 2552528 kB' 'Active(anon): 129640 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 120772 kB' 'Mapped: 48708 kB' 'Shmem: 10476 kB' 'KReclaimable: 81492 kB' 'Slab: 162996 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81504 kB' 'KernelStack: 6560 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 340016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55252 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB' 00:05:41.324 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.324 15:41:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: fields MemTotal through ShmemPmdMapped are each read and compared against HugePages_Total; none matches and every one is skipped with continue]
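With both counters back as 0, the test asserts that the preallocated pool adds up (the hugepages.sh@99-110 records above and just below). A sketch of that accounting; the variable name total is assumed, everything else follows the trace, and the Hugetlb: 2097152 kB field in the dumps confirms the arithmetic (1024 pages x 2048 kB = 2 GiB):

nr_hugepages=1024                          # requested pool size for even_2G_alloc
surp=$(get_meminfo HugePages_Surp)         # hugepages.sh@99  -> 0 in this run
resv=$(get_meminfo HugePages_Rsvd)         # hugepages.sh@100 -> 0 in this run
echo "nr_hugepages=$nr_hugepages"          # hugepages.sh@102
echo "resv_hugepages=$resv"                # hugepages.sh@103
echo "surplus_hugepages=$surp"             # hugepages.sh@104
total=$(get_meminfo HugePages_Total)       # hugepages.sh@110 -> 1024
(( total == nr_hugepages + surp + resv ))  # hugepages.sh@107: pool adds up
(( total == nr_hugepages ))                # hugepages.sh@109: no surplus or reserve left

The records that follow resume the scan at FileHugePages and end with HugePages_Total matching and echo 1024.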
00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:41.325 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- 
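The scan elided above is easier to read as code than as xtrace. A minimal sketch of what the traced setup/common.sh get_meminfo loop is doing, reconstructed from the trace rather than copied from the repo (get_meminfo_sketch is a hypothetical stand-in name; the backslash-escaped patterns in the trace are just xtrace quoting a literal comparison):

    # Scan "key: value" pairs and print the value of the requested key.
    # Each non-matching key produces one of the 'continue' records above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"          # e.g. 1024, matching the 'echo 1024' trace line
            return 0
        done < /proc/meminfo
        return 1                 # requested key not present
    }
    # get_meminfo_sketch HugePages_Total   -> 1024 on this box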
00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6810012 kB' 'MemUsed: 5431960 kB' 'SwapCached: 0 kB' 'Active: 461920 kB' 'Inactive: 2552524 kB' 'Active(anon): 129896 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 2895024 kB' 'Mapped: 48708 kB' 'AnonPages: 121032 kB' 'Shmem: 10476 kB' 'KernelStack: 6544 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81492 kB' 'Slab: 162980 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:41.326 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-32 xtrace elided: per-key scan of node0 meminfo for HugePages_Surp, one 'continue' per non-matching key from MemTotal through HugePages_Free ...]
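When a node argument is given (node=0 here), the helper swaps in the per-node meminfo file and strips the "Node 0" prefix that each of its lines carries. Roughly, per the trace (a sketch, with extglob assumed on since the +([0-9]) pattern requires it):

    shopt -s extglob                       # the +([0-9]) pattern below needs it
    node=0 mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"              # one array element per meminfo line
    mem=("${mem[@]#Node +([0-9]) }")       # drop the leading "Node 0 " tag
    printf '%s\n' "${mem[@]}"              # the long quoted dump in the trace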
00:05:41.343 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.343 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:41.343 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:41.343 node0=1024 expecting 1024
00:05:41.343 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:41.343 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
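The per-node bookkeeping the trace just stepped through amounts to: add reserved and surplus pages to each node's expected count, then compare against what the kernel reports for that node. A condensed sketch with the traced single-node values filled in (array names follow the trace; this is a reconstruction, not the script itself):

    nodes_test=([0]=1024)   # expected pages per node
    nodes_sys=([0]=1024)    # pages the kernel actually reports per node
    resv=0 surp=0           # both 0 per the get_meminfo calls above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv + surp ))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || echo "node$node mismatch"
    done
    # -> node0=1024 expecting 1024, the line printed in the log above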
00:05:41.343 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:41.343 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:41.343 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:41.343 15:41:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:41.343
00:05:41.343 real 0m1.005s
00:05:41.343 user 0m0.412s
00:05:41.343 sys 0m0.636s
00:05:41.343 ************************************
00:05:41.343 END TEST even_2G_alloc
00:05:41.343 ************************************
00:05:41.343 15:41:16 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:41.343 15:41:16 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:41.603 15:41:16 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:41.603 15:41:16 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:41.603 15:41:16 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:41.603 15:41:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:41.603 ************************************
00:05:41.603 START TEST odd_alloc
00:05:41.603 ************************************
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
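The jump from size=2098176 to nr_hugepages=1025 is consistent with ceiling division by the 2048 kB default hugepage size: 1024 pages cover only 2097152 kB, so one more page is needed, which is exactly the deliberately odd count this test wants. As arithmetic (a sketch of the result, not necessarily how get_test_nr_hugepages spells it):

    size_kb=2098176 page_kb=2048
    (( nr_hugepages = (size_kb + page_kb - 1) / page_kb ))   # ceiling division
    echo "$nr_hugepages"                                     # -> 1025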
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:41.603 15:41:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:42.172 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:42.172 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:42.172 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:42.172 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:42.172 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:42.438 15:41:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:42.438 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:42.438 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:42.438 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:42.438 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6805716 kB' 'MemAvailable: 9493856 kB' 'Buffers: 2436 kB' 'Cached: 2892596 kB' 'SwapCached: 0 kB' 'Active: 459264 kB' 'Inactive: 2552532 kB' 'Active(anon): 127240 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118120 kB' 'Mapped: 48296 kB' 'Shmem: 10476 kB' 'KReclaimable: 81492 kB' 'Slab: 163020 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81528 kB' 'KernelStack: 6592 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 326100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55284 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
[... setup/common.sh@31-32 xtrace elided: per-key scan of /proc/meminfo for AnonHugePages, 'continue' on every non-matching key from MemTotal through HardwareCorrupted ...]
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
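verify_nr_hugepages is pulling several counters in a row (AnonHugePages, HugePages_Surp, and, as in the even_2G_alloc pass above, HugePages_Rsvd) to check the hugepage accounting identity from the earlier `(( 1024 == nr_hugepages + surp + resv ))` trace line. Schematically, reusing the sketch helper from above (a reconstruction; nr_hugepages=1025 as just configured):

    nr_hugepages=1025
    anon=$(get_meminfo_sketch AnonHugePages)     # 0 here: no THP interference
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
    total=$(get_meminfo_sketch HugePages_Total)  # 1025 after setup
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'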
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:42.439 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6806292 kB' 'MemAvailable: 9494432 kB' 'Buffers: 2436 kB' 'Cached: 2892596 kB' 'SwapCached: 0 kB' 'Active: 458616 kB' 'Inactive: 2552532 kB' 'Active(anon): 126592 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 117436 kB' 'Mapped: 47972 kB' 'Shmem: 10476 kB' 'KReclaimable: 81492 kB' 'Slab: 162936 kB' 'SReclaimable: 81492 kB' 'SUnreclaim: 81444 kB' 'KernelStack: 6528 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 326100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
[... setup/common.sh@31-32 xtrace elided: per-key scan for HugePages_Surp, 'continue' on every field from MemTotal through HugePages_Free, still in progress as the log continues ...]
00:05:42.441
15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6806292 kB' 'MemAvailable: 9494424 kB' 'Buffers: 2436 kB' 'Cached: 2892596 kB' 'SwapCached: 0 kB' 'Active: 458604 kB' 'Inactive: 2552532 kB' 'Active(anon): 126580 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 117728 kB' 'Mapped: 48752 kB' 'Shmem: 10476 kB' 'KReclaimable: 81476 kB' 'Slab: 162920 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81444 kB' 'KernelStack: 6480 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 328664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB' 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.441 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.441 15:41:17 
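The trace above is the core of get_meminfo in setup/common.sh: it prints the chosen meminfo file once, then walks it line by line with IFS=': ' read -r var val _, continuing past every key until the requested one (HugePages_Surp above, HugePages_Rsvd next) matches and its value is echoed. A minimal self-contained sketch of that pattern, simplified to read /proc/meminfo directly rather than buffering it through mapfile as the real helper does (a reconstruction, not the verbatim SPDK script):

#!/usr/bin/env bash
# Sketch of the key-lookup pattern traced above (simplified).
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key produces one 'continue' in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"        # numeric value; any trailing "kB" lands in _
        return 0
    done < /proc/meminfo
    return 1
}

surp=$(get_meminfo HugePages_Surp)   # 0 in the run logged here

This linear scan is why the log shows one continue / IFS / read triplet per meminfo key, and why the whole scan restarts from the top for each counter queried.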
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:42.443 nr_hugepages=1025 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:42.443 resv_hugepages=0 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:42.443 surplus_hugepages=0 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:42.443 anon_hugepages=0 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:42.443 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6806292 kB' 'MemAvailable: 9494420 kB' 'Buffers: 2436 kB' 'Cached: 2892592 kB' 'SwapCached: 0 kB' 'Active: 458396 kB' 'Inactive: 2552528 kB' 'Active(anon): 126372 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 117284 kB' 'Mapped: 47972 kB' 'Shmem: 10476 kB' 'KReclaimable: 81476 kB' 'Slab: 162868 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81392 kB' 'KernelStack: 6416 kB' 'PageTables: 3524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 326100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
[trace elided: the key-by-key scan now repeats for HugePages_Total]
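At hugepages.sh@107-110 the test closes the loop on its odd allocation: it requested nr_hugepages=1025 (deliberately odd), and the kernel's HugePages_Total must equal the request plus any surplus and reserved pages, each fetched through the same scan. A hedged sketch of that consistency check, reusing get_meminfo from the sketch above (a reconstruction with illustrative variable names, not the verbatim hugepages.sh):

# Consistency check mirroring hugepages.sh@107-110: the odd request
# must be fully visible in the kernel's own hugepage accounting.
nr_hugepages=1025                      # the odd allocation under test
surp=$(get_meminfo HugePages_Surp)     # 0 in the log above
resv=$(get_meminfo HugePages_Rsvd)     # 0 in the log above
total=$(get_meminfo HugePages_Total)   # 1025, per the scan below

(( total == nr_hugepages + surp + resv )) || {
    echo "hugepage accounting mismatch: total=$total" >&2
    exit 1
}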
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:42.445 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6806292 kB' 'MemUsed: 5435680 kB' 'SwapCached: 0 kB' 'Active: 458304 kB' 'Inactive: 2552536 kB' 'Active(anon): 126280 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 2895036 kB' 'Mapped: 47972 kB' 'AnonPages: 117452 kB' 'Shmem: 10476 kB' 'KernelStack: 6464 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81476 kB' 'Slab: 162856 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[trace elided: the key-by-key scan now runs against HugePages_Surp for node 0; this excerpt ends inside that scan]
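With the global numbers verified, get_nodes enumerates /sys/devices/system/node/node+([0-9]) (an extglob pattern) and the same lookup re-runs per NUMA node: when a node argument is supplied, common.sh@23-24 swaps mem_f to the node's own meminfo file, whose lines carry a "Node N " prefix that the "${mem[@]#Node +([0-9]) }" expansion strips. A sketch of that per-node variant, under the same simplifying assumptions as the earlier sketch (a reconstruction, not the verbatim helper):

# Per-node lookup mirroring common.sh@17-33 with a node argument.
shopt -s extglob   # required for the +([0-9]) patterns below

get_node_meminfo() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    # Prefer the node's own meminfo when one was asked for and it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 MemTotal: ..."; drop the prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_node_meminfo HugePages_Surp 0   # 0 for node 0 in the run logged here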
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[the IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue xtrace repeats for every remaining node0 meminfo field, MemFree through HugePages_Free]
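For readers following the trace: each compare-and-continue pair above is setup/common.sh's get_meminfo helper walking a captured meminfo snapshot one field at a time, and the log resumes below at the HugePages_Surp match. A minimal standalone sketch of that helper, reconstructed from the traced commands (simplified and without xtrace, so an illustration rather than the shipped script):

shopt -s extglob   # the "Node N " prefix strip below uses an extended glob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Per-node snapshots live in /sys and prefix every line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total    # system-wide; printed 1025 in the run above
get_meminfo HugePages_Surp 0   # node 0; printed 0 in the run above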
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.446 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:42.446 15:41:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:42.446 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:42.446 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:42.446 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:42.446 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:42.446 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:42.446 node0=1025 expecting 1025 00:05:42.446 15:41:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:42.446 00:05:42.446 real 0m0.999s 00:05:42.446 user 0m0.434s 00:05:42.446 sys 0m0.631s 00:05:42.446 15:41:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.446 15:41:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:42.446 ************************************ 00:05:42.446 END TEST odd_alloc 00:05:42.446 ************************************ 00:05:42.446 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:42.446 15:41:17 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.446 15:41:17 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.446 15:41:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:42.446 ************************************ 00:05:42.446 START TEST custom_alloc 00:05:42.446 ************************************ 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- 
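The custom_alloc prologue just traced turns the 1048576 kB (1 GiB) request into a per-node page count: with the 2048 kB Hugepagesize reported in meminfo, 1048576 / 2048 = 512, which is the nr_hugepages=512 above and the HUGENODE='nodes_hp[0]=512' export in the surrounding trace (the argument's units line up with meminfo's kB figures). A back-of-envelope check; the variable names here are illustrative, not taken from setup/hugepages.sh:

size_kb=1048576                                           # requested pool: 1 GiB
hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this VM
nr_hugepages=$(( size_kb / hp_kb ))                       # 1048576 / 2048 = 512
echo "HUGENODE='nodes_hp[0]=$nr_hugepages'"               # matches the traced export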
# local _no_nodes=1 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:42.446 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:42.447 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:42.447 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:42.706 15:41:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:42.706 15:41:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.706 15:41:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.277 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.277 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:43.277 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:43.277 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:43.277 0000:00:13.0 
(1b36 0010): Already using the uio_pci_generic driver 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.277 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7861584 kB' 'MemAvailable: 10549720 kB' 'Buffers: 2436 kB' 'Cached: 2892600 kB' 'SwapCached: 0 kB' 'Active: 458408 kB' 'Inactive: 2552536 kB' 'Active(anon): 126384 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 117728 kB' 'Mapped: 48084 kB' 'Shmem: 10476 kB' 'KReclaimable: 81476 kB' 'Slab: 162844 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81368 kB' 'KernelStack: 6480 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 326100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB' 00:05:43.277 
15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[the compare-and-continue xtrace repeats for every meminfo field, MemTotal through Committed_AS, while get_meminfo scans for AnonHugePages]
00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- #
read -r var val _ 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- 
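The anon=0 just recorded comes from the guard traced at the top of verify_nr_hugepages: AnonHugePages is only counted when the transparent-hugepage setting is not [never], per the traced test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], with the value presumably read from /sys/kernel/mm/transparent_hugepage/enabled. The same check as a standalone snippet (a sketch, not the shipped function):

thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP is not disabled, so anonymous huge pages may exist; count them.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=$anon"   # 0 in the run above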
setup/common.sh@31 -- # IFS=': ' 00:05:43.543 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7861892 kB' 'MemAvailable: 10550028 kB' 'Buffers: 2436 kB' 'Cached: 2892600 kB' 'SwapCached: 0 kB' 'Active: 458296 kB' 'Inactive: 2552536 kB' 'Active(anon): 126272 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 117412 kB' 'Mapped: 47972 kB' 'Shmem: 10476 kB' 'KReclaimable: 81476 kB' 'Slab: 162836 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81360 kB' 'KernelStack: 6496 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 325868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB' 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.544 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.544 15:41:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[the compare-and-continue xtrace repeats for every meminfo field, SwapCached through Unaccepted, while get_meminfo scans for HugePages_Surp]
00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.545 15:41:18
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7862152 kB' 'MemAvailable: 10550280 kB' 'Buffers: 2436 kB' 'Cached: 2892592 kB' 'SwapCached: 0 kB' 'Active: 458220 kB' 'Inactive: 2552528 kB' 'Active(anon): 126196 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 117360 kB' 'Mapped: 47972 kB' 'Shmem: 10476 kB' 'KReclaimable: 81476 kB' 'Slab: 162836 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81360 kB' 'KernelStack: 6464 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 326100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB' 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.545 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.546 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:43.547 15:41:18 
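Both lookups above (HugePages_Surp, then HugePages_Rsvd) follow the same pattern: snapshot the meminfo file once, then walk it field by field until the requested key matches, printing its value. A minimal standalone sketch of that parsing pattern follows; lookup_meminfo is a hypothetical name chosen for illustration, not the actual setup/common.sh helper (the trace calls it get_meminfo).

#!/usr/bin/env bash
# Sketch of the field-by-field scan the xtrace above records.
lookup_meminfo() {
	local get=$1
	local mem_f=/proc/meminfo
	local var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue  # non-matching fields are skipped
		echo "$val"
		return 0
	done <"$mem_f"
	return 1
}

surp=$(lookup_meminfo HugePages_Surp)  # 0 in this run
resv=$(lookup_meminfo HugePages_Rsvd)  # 0 in this run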
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
nr_hugepages=512
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:43.547 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:43.548 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7862228 kB' 'MemAvailable: 10550360 kB' 'Buffers: 2436 kB' 'Cached: 2892596 kB' 'SwapCached: 0 kB' 'Active: 458164 kB' 'Inactive: 2552532 kB' 'Active(anon): 126140 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 117260 kB' 'Mapped: 47972 kB' 'Shmem: 10476 kB' 'KReclaimable: 81476 kB' 'Slab: 162836 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81360 kB' 'KernelStack: 6480 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 326100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
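The (( 512 == nr_hugepages + surp + resv )) check a few records above is the core accounting identity for this test: the requested page count must equal the configured count once surplus and reserved pages are folded in. A one-liner sketch with this run's values; the standalone framing is an assumption, not a copy of setup/hugepages.sh:

nr_hugepages=512
surp=0   # HugePages_Surp read from /proc/meminfo
resv=0   # HugePages_Rsvd read from /proc/meminfo
(( 512 == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"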
00:05:43.548 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-33 -- # (xtrace condensed: every field from MemTotal through Unaccepted compared against HugePages_Total and skipped via continue)
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:43.549 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7862228 kB' 'MemUsed: 4379744 kB' 'SwapCached: 0 kB' 'Active: 458164 kB' 'Inactive: 2552532 kB' 'Active(anon): 126140 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2895032 kB' 'Mapped: 47972 kB' 'AnonPages: 117260 kB' 'Shmem: 10476 kB' 'KernelStack: 6480 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81476 kB' 'Slab: 162836 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
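Note the difference from the earlier lookups: this get_meminfo call carries a node argument (0), so at setup/common.sh@23-24 the source file switches from /proc/meminfo to the per-node sysfs copy, whose lines carry a "Node <N>" prefix that @29 strips before scanning. A hypothetical standalone rendering of that selection, with variable names mirroring the trace:

node=0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
	mem_f=/sys/devices/system/node/node${node}/meminfo
fi
shopt -s extglob                  # required for the +([0-9]) pattern below
mapfile -t mem <"$mem_f"
mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node <N> " line prefix
printf '%s\n' "${mem[@]}" | grep -F HugePages_Surp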
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[xtrace elided: setup/common.sh@31-@32 compared each remaining /proc/meminfo field (Inactive through HugePages_Free) against HugePages_Surp and skipped it with `continue`]
00:05:43.550 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:43.550 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:43.550 15:41:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:43.550 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:43.550 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:43.550 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:43.550 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:43.550 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:43.550 node0=512 expecting 512
00:05:43.550 15:41:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:43.550
00:05:43.550 real	0m0.956s
00:05:43.550 user	0m0.416s
00:05:43.550 sys	0m0.607s
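The scan that dominates the trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time until it hits the requested key. A minimal stand-alone sketch of that pattern, reconstructed from the xtrace rather than copied from the SPDK source:

    #!/usr/bin/env bash
    # Split each /proc/meminfo line on ': ', skip every non-matching field
    # with `continue` (the repeated "[[ X == ... ]] / continue" entries in
    # the trace), and echo the value of the requested field.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"            # e.g. "0" for HugePages_Surp
            return 0
        done < /proc/meminfo
        return 1                   # field not present on this kernel
    }

    get_meminfo_sketch HugePages_Surp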
00:05:43.550 15:41:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:43.550 15:41:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:43.550 ************************************
00:05:43.550 END TEST custom_alloc
00:05:43.550 ************************************
00:05:43.550 15:41:18 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:43.550 15:41:18 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:43.550 15:41:18 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:43.550 15:41:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:43.550 ************************************
00:05:43.550 START TEST no_shrink_alloc
00:05:43.550 ************************************
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
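The trace above shows get_test_nr_hugepages turning a 2 GiB request into 1024 huge pages pinned on node 0. A hedged reconstruction of that flow; the names follow the trace, while the kB units and the 2048 kB page size are assumptions taken from the Hugepagesize field in the meminfo snapshots below:

    #!/usr/bin/env bash
    # Sketch of setup/hugepages.sh@49-@73 as seen in the xtrace, simplified.
    nr_hugepages=0
    declare -a nodes_test=()

    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@") _nr_hugepages=$nr_hugepages _no_nodes=1
        if (( ${#user_nodes[@]} > 0 )); then
            # Pin the full count on each explicitly named NUMA node.
            for _no_nodes in "${user_nodes[@]}"; do
                nodes_test[_no_nodes]=$_nr_hugepages   # -> nodes_test[0]=1024
            done
        fi
    }

    get_test_nr_hugepages() {
        local size=$1; shift                # requested size in kB (assumption)
        local node_ids=("$@")               # optional node list, ('0') here
        local default_hugepages=2048        # kB per huge page (assumption)
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024
        get_test_nr_hugepages_per_node "${node_ids[@]}"
    }

    get_test_nr_hugepages 2097152 0
    echo "nr_hugepages=$nr_hugepages nodes_test[0]=${nodes_test[0]}"   # 1024 / 1024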
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:43.551 15:41:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:44.120 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:44.383 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:44.383 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:44.383 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:44.383 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:44.383 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:44.383 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:44.383 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:44.383 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:44.383 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:44.383 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:44.383 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:44.384 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:44.384 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:44.384 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:44.384 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:44.384 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:44.384 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:44.384 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.384 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.384 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.384 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.384 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.384 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6820988 kB' 'MemAvailable: 9509120 kB' 'Buffers: 2436 kB' 'Cached: 2892596 kB' 'SwapCached: 0 kB' 'Active: 458432 kB' 'Inactive: 2552532 kB' 'Active(anon): 126408 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 117536 kB' 'Mapped: 48100 kB' 'Shmem: 10476 kB' 'KReclaimable: 81476 kB' 'Slab: 162824 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81348 kB' 'KernelStack: 6496 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 326100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
00:05:44.384 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:44.384 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: setup/common.sh@31-@32 compared every field ahead of AnonHugePages (MemTotal through HardwareCorrupted) and skipped each with `continue`]
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
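At this point verify_nr_hugepages has gated on transparent hugepages not being set to [never] and collected the first of its three counters. A stand-alone sketch of that bookkeeping; variable names (anon/surp/resv) follow setup/hugepages.sh@92-@100 in the trace, the grep-based reads are a simplification, not the SPDK code:

    #!/usr/bin/env bash
    # Mirror the trace's THP gate, then pull the three counters it fetches.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *'[never]'* ]]; then
        # AnonHugePages is reported in kB, unlike the HugePages_* page counts.
        IFS=': ' read -r _ anon _ < <(grep '^AnonHugePages:' /proc/meminfo)
    fi
    IFS=': ' read -r _ surp _ < <(grep '^HugePages_Surp:' /proc/meminfo)
    IFS=': ' read -r _ resv _ < <(grep '^HugePages_Rsvd:' /proc/meminfo)
    echo "anon=${anon} surp=${surp} resv=${resv}"   # all three are 0 in this run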
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6820488 kB' 'MemAvailable: 9508620 kB' 'Buffers: 2436 kB' 'Cached: 2892596 kB' 'SwapCached: 0 kB' 'Active: 458332 kB' 'Inactive: 2552532 kB' 'Active(anon): 126308 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 117800 kB' 'Mapped: 48100 kB' 'Shmem: 10476 kB' 'KReclaimable: 81476 kB' 'Slab: 162820 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81344 kB' 'KernelStack: 6528 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 328748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55188 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:44.385 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: setup/common.sh@31-@32 compared every field ahead of HugePages_Surp (MemTotal through HugePages_Free) and skipped each with `continue`]
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
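With surp known and resv about to be fetched, the snapshot above (HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0) lets us do the arithmetic the verifier is building toward. This formula is an illustration of surplus/reserved bookkeeping, not the verbatim SPDK check:

    # Worked example with the values from the snapshot above.
    total=1024 free=1024 resv=0 surp=0
    available=$((free - resv + surp))   # pages a new mapping could still claim
    echo "HugePages available: ${available}/${total}"   # -> 1024/1024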
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6819984 kB' 'MemAvailable: 9508112 kB' 'Buffers: 2436 kB' 'Cached: 2892592 kB' 'SwapCached: 0 kB' 'Active: 458236 kB' 'Inactive: 2552528 kB' 'Active(anon): 126212 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 117708 kB' 'Mapped: 48100 kB' 'Shmem: 10476 kB' 'KReclaimable: 81476 kB' 'Slab: 162816 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81340 kB' 'KernelStack: 6496 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 326100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:44.387 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: setup/common.sh@31-@32 scans the fields ahead of HugePages_Rsvd (MemTotal onward), skipping each with `continue`; the captured log ends mid-scan]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.388 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:44.389 nr_hugepages=1024 00:05:44.389 resv_hugepages=0 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:44.389 surplus_hugepages=0 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:44.389 anon_hugepages=0 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6819984 kB' 'MemAvailable: 9508116 kB' 'Buffers: 2436 kB' 'Cached: 2892596 kB' 'SwapCached: 0 kB' 'Active: 458236 kB' 'Inactive: 2552532 kB' 'Active(anon): 126212 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 117636 kB' 'Mapped: 47968 kB' 'Shmem: 10476 kB' 'KReclaimable: 81476 kB' 'Slab: 162812 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81336 kB' 'KernelStack: 6432 kB' 'PageTables: 3568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 326100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.389 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.674 15:41:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.674 15:41:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.674 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.675 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6819732 kB' 'MemUsed: 5422240 kB' 'SwapCached: 0 kB' 'Active: 458092 kB' 'Inactive: 2552532 kB' 'Active(anon): 126068 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2895032 kB' 'Mapped: 47968 kB' 'AnonPages: 117496 kB' 'Shmem: 10476 kB' 'KernelStack: 6464 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81476 kB' 'Slab: 162812 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:44.676 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:44.676 15:41:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / continue for every remaining /proc/meminfo field (Mapped through HugePages_Free) while scanning for HugePages_Surp]
00:05:44.677 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:44.677 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:44.677 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:44.677 node0=1024 expecting 1024
00:05:44.677 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:44.677 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:44.677 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:44.677 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:44.677 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:44.677 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:44.677 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:44.677 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:44.677 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:44.677 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:44.677 15:41:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
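The @202 lines above rerun scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no; the INFO line in the output below shows the pool is left at its existing 1024 pages rather than shrunk to 512, which is the behavior this no_shrink_alloc test exists to check. A minimal sketch of what such a request amounts to at the kernel interface (the request_hugepages helper and its exact no-shrink policy are illustrative assumptions; the real setup.sh also handles per-node allocation, memlock limits, and PCI device binding):

  # Sketch only: request N default-size hugepages, but never shrink an
  # already larger pool. Per-node tests would use the sysfs files under
  # /sys/devices/system/node/node*/hugepages/ instead.
  request_hugepages() {
      local want=$1 have
      have=$(< /proc/sys/vm/nr_hugepages)
      if (( have >= want )); then
          # Mirrors the INFO message in the log: keep the larger pool.
          echo "INFO: Requested $want hugepages but $have already allocated" >&2
          return 0
      fi
      echo "$want" > /proc/sys/vm/nr_hugepages
  }

  request_hugepages 512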
00:05:45.248 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:45.248 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:45.248 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:45.248 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:45.248 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:45.248 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6819712 kB' 'MemAvailable: 9507848 kB' 'Buffers: 2436 kB' 'Cached: 2892600 kB' 'SwapCached: 0 kB' 'Active: 458340 kB' 'Inactive: 2552536 kB' 'Active(anon): 126316 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 117736 kB' 'Mapped: 48152 kB' 'Shmem: 10476 kB' 'KReclaimable: 81476 kB' 'Slab: 162840 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81364 kB' 'KernelStack: 6520 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 326232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
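The @16 printf above is the mapfile-captured /proc/meminfo snapshot; the @31/@32 lines that follow are get_meminfo stepping through it one field at a time. A minimal sketch of that scan, simplified from the setup/common.sh trace (the real helper snapshots first via mapfile so repeated reads see consistent data, and can also read a per-node meminfo file):

  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # @32: skip (continue) every field that is not the requested key.
          [[ $var == "$get" ]] || continue
          echo "$val"   # @33: echo the value (0 for AnonHugePages here) ...
          return 0      # ... and stop scanning.
      done < /proc/meminfo
  }

  anon=$(get_meminfo AnonHugePages)   # 0 in the run above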
00:05:45.516 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: setup/common.sh@31-32 continues past every field from MemTotal through HardwareCorrupted while scanning for AnonHugePages]
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
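Each key costs a full rescan of the snapshot, which is why the same @31/@32 pattern repeats below for HugePages_Surp and then HugePages_Rsvd. A hypothetical alternative that indexes the snapshot once (not what setup/common.sh does; shown only to make the scan's structure concrete):

  declare -A meminfo   # requires bash 4+
  while IFS=': ' read -r key val _; do
      meminfo[$key]=$val   # a trailing "kB" unit lands in the discarded field
  done < /proc/meminfo

  echo "${meminfo[HugePages_Surp]}"   # 0 in the run above
  echo "${meminfo[HugePages_Rsvd]}"   # 0 in the run above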
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:45.518 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6820240 kB' 'MemAvailable: 9508376 kB' 'Buffers: 2436 kB' 'Cached: 2892600 kB' 'SwapCached: 0 kB' 'Active: 458400 kB' 'Inactive: 2552536 kB' 'Active(anon): 126376 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 117552 kB' 'Mapped: 48028 kB' 'Shmem: 10476 kB' 'KReclaimable: 81476 kB' 'Slab: 162868 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81392 kB' 'KernelStack: 6528 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 326232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: setup/common.sh@31-32 continues past every field from MemTotal through HugePages_Free while scanning for HugePages_Surp]
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
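With anon, surp, and resv collected (all 0 here), verify_nr_hugepages compares each node's hugepage count against the expected value; that comparison is what produced the node0=1024 expecting 1024 line and the @130 [[ 1024 == 1024 ]] test earlier. An illustrative version of such a per-node check (not hugepages.sh verbatim; the sysfs layout shown is the standard kernel one for 2048 kB pages):

  # Illustrative only: verify each NUMA node still holds the expected
  # number of 2 MiB hugepages, in the spirit of the @126-@130 trace.
  expected=1024
  for node_dir in /sys/devices/system/node/node*; do
      node=${node_dir##*/node}
      actual=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
      echo "node$node=$actual expecting $expected"
      [[ $actual == "$expected" ]] || exit 1   # mirrors the @130 comparison
  done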
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6820240 kB' 'MemAvailable: 9508376 kB' 'Buffers: 2436 kB' 'Cached: 2892600 kB' 'SwapCached: 0 kB' 'Active: 458684 kB' 'Inactive: 2552536 kB' 'Active(anon): 126660 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 117572 kB' 'Mapped: 48028 kB' 'Shmem: 10476 kB' 'KReclaimable: 81476 kB' 'Slab: 162860 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81384 kB' 'KernelStack: 6528 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 326184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB'
00:05:45.520 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: setup/common.sh@31-32 continues past every field from MemTotal through VmallocUsed while scanning for HugePages_Rsvd]
00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.522 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:45.523 nr_hugepages=1024 00:05:45.523 resv_hugepages=0 00:05:45.523 surplus_hugepages=0 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:45.523 anon_hugepages=0 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:45.523 15:41:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6820240 kB' 'MemAvailable: 9508376 kB' 'Buffers: 2436 kB' 'Cached: 2892600 kB' 'SwapCached: 0 kB' 'Active: 458720 kB' 'Inactive: 2552536 kB' 'Active(anon): 126696 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552536 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 117508 kB' 'Mapped: 48088 kB' 'Shmem: 10476 kB' 'KReclaimable: 81476 kB' 'Slab: 162860 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81384 kB' 'KernelStack: 6512 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 326232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 4001792 kB' 'DirectMap1G: 10485760 kB' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.523 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 
15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.524 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.525 
15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6820240 kB' 'MemUsed: 5421732 kB' 'SwapCached: 0 kB' 'Active: 458224 kB' 'Inactive: 2552532 kB' 'Active(anon): 126200 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2552532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 2895032 kB' 'Mapped: 48028 kB' 'AnonPages: 117012 kB' 'Shmem: 10476 kB' 'KernelStack: 6496 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81476 kB' 'Slab: 162852 kB' 'SReclaimable: 81476 kB' 'SUnreclaim: 81376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.525 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
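The long runs of '[[ key == \H\u\g\e\P\a\g\e\s... ]] / continue' entries above are a single loop traced field by field: get_meminfo scans every 'Key: value' line of a meminfo file until it reaches the requested key, preferring the per-node sysfs file when a node number is passed (here node 0). A minimal sketch of that parsing strategy, assuming bash with extglob enabled, as the '+([0-9])' pattern in the trace implies; this is not the exact SPDK helper:

    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # Per-node meminfo files prefix every line with "Node N ".
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }            # strip the per-node prefix, if any
            IFS=': ' read -r var val _ <<<"$line"  # e.g. var=HugePages_Surp val=0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done <"$mem_f"
        return 1
    }

Called as 'get_meminfo_sketch HugePages_Surp 0' against the node0 dump traced below, this would print 0, which is exactly the value the real helper returns via its 'echo 0'.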
00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:45.526 node0=1024 expecting 1024 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:45.526 15:41:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:45.527 00:05:45.527 real 0m1.946s 00:05:45.527 user 0m0.794s 00:05:45.527 sys 0m1.268s 00:05:45.527 15:41:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.527 ************************************ 00:05:45.527 END TEST no_shrink_alloc 
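The assertion that just passed boils down to a bookkeeping identity: the HugePages_Total reported by the kernel must equal the configured nr_hugepages plus reserved and surplus pages, and each node is then expected to hold its share (node0=1024 here, the only node). A hedged stand-alone version of the same check, reading the standard procfs paths directly instead of the test's helper functions:

    nr=$(< /proc/sys/vm/nr_hugepages)
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    # Mirrors the traced check: (( 1024 == nr_hugepages + surp + resv ))
    if (( total == nr + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "mismatch: total=$total nr=$nr surp=$surp resv=$resv" >&2
    fi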
00:05:45.527 ************************************ 00:05:45.527 15:41:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:45.527 15:41:20 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:45.527 15:41:20 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:45.527 15:41:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:45.527 15:41:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:45.527 15:41:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:45.527 15:41:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:45.527 15:41:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:45.527 15:41:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:45.527 15:41:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:45.527 00:05:45.527 real 0m8.231s 00:05:45.527 user 0m3.366s 00:05:45.527 sys 0m5.206s 00:05:45.527 15:41:20 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.527 15:41:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:45.527 ************************************ 00:05:45.527 END TEST hugepages 00:05:45.527 ************************************ 00:05:45.786 15:41:20 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:45.786 15:41:20 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.786 15:41:20 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.786 15:41:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:45.786 ************************************ 00:05:45.786 START TEST driver 00:05:45.786 ************************************ 00:05:45.786 15:41:20 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:45.786 * Looking for test storage... 
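The clear_hp teardown traced above walks every huge page pool under sysfs and zeroes it, one page size per node. The trace shows only the bare 'echo 0', so the redirect target in this sketch is inferred from the standard sysfs layout (run as root):

    shopt -s nullglob   # skip cleanly if a node exposes no hugepage pools
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # assumed target file per pool
        done
    done
    export CLEAR_HUGE=yes   # later setup.sh reset calls then start from a clean pool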
00:05:45.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:45.786 15:41:20 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:45.786 15:41:20 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:45.786 15:41:20 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:52.352 15:41:26 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:52.352 15:41:26 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.352 15:41:26 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.352 15:41:26 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:52.352 ************************************ 00:05:52.352 START TEST guess_driver 00:05:52.352 ************************************ 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:52.352 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:52.352 Looking for driver=uio_pci_generic 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:52.352 15:41:26 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
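# guess_driver above is SPDK's pick_driver in action: vfio-pci is chosen
# only when the host exposes IOMMU groups or vfio's unsafe no-IOMMU mode
# is enabled; this VM has neither, so `(( 0 > 0 ))` and `[[ '' == Y ]]`
# both fail and the fallback asks modprobe whether uio_pci_generic
# resolves to a real .ko. A condensed sketch of that decision, using the
# same probes seen in the trace (not a verbatim copy of driver.sh):
pick_driver() {
    shopt -s nullglob    # an empty iommu_groups dir must count as zero
    local iommu_groups=(/sys/kernel/iommu_groups/*) unsafe_vfio=""
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic    # the branch this run takes
    else
        echo 'No valid driver found'
    fi
}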
00:05:52.353 15:41:26 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.353 15:41:26 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:52.921 15:41:27 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:52.921 15:41:27 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:52.921 15:41:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:53.490 15:41:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:53.490 15:41:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:53.490 15:41:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:53.490 15:41:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:53.490 15:41:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:53.490 15:41:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:53.490 15:41:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:53.490 15:41:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:53.490 15:41:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:53.749 15:41:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:53.749 15:41:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:53.749 15:41:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:53.749 15:41:28 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:53.749 15:41:28 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:53.749 15:41:28 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:53.750 15:41:28 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:00.322 00:06:00.322 real 0m7.907s 00:06:00.322 user 0m0.945s 00:06:00.322 sys 0m2.111s 00:06:00.322 15:41:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.322 15:41:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:00.322 ************************************ 00:06:00.322 END TEST guess_driver 00:06:00.322 ************************************ 00:06:00.322 00:06:00.322 real 0m14.409s 00:06:00.322 user 0m1.449s 00:06:00.322 sys 0m3.270s 00:06:00.322 15:41:34 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.322 15:41:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:00.322 ************************************ 00:06:00.322 END TEST driver 00:06:00.322 ************************************ 00:06:00.322 15:41:34 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:00.322 15:41:34 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.322 15:41:34 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.322 15:41:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:00.322 ************************************ 00:06:00.322 START TEST devices 00:06:00.322 
************************************ 00:06:00.322 15:41:34 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:00.322 * Looking for test storage... 00:06:00.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:00.322 15:41:34 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:00.322 15:41:34 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:00.322 15:41:34 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:00.322 15:41:34 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:01.709 
15:41:36 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:01.709 15:41:36 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:01.709 15:41:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:01.709 15:41:36 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:01.709 No valid GPT data, bailing 00:06:01.709 15:41:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:01.709 15:41:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:01.709 15:41:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:01.709 15:41:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:01.709 15:41:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:01.709 15:41:36 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@201 -- # 
ctrl=nvme1 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:01.709 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:06:01.709 15:41:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:06:01.709 15:41:36 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:06:01.968 No valid GPT data, bailing 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:06:01.968 15:41:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:06:01.968 15:41:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:06:01.968 15:41:36 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:06:01.968 No valid GPT data, bailing 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:06:01.968 15:41:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:06:01.968 15:41:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:06:01.968 15:41:36 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:01.968 
15:41:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:06:01.968 No valid GPT data, bailing 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:06:01.968 15:41:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:06:01.968 15:41:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:06:01.968 15:41:36 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:06:01.968 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:06:01.968 No valid GPT data, bailing 00:06:01.968 15:41:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:02.227 15:41:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:02.227 15:41:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:02.227 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:06:02.227 15:41:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:06:02.227 15:41:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:06:02.227 15:41:36 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:02.227 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:02.227 15:41:36 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:02.227 15:41:36 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:06:02.227 15:41:36 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:02.227 15:41:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:06:02.227 15:41:36 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:06:02.227 15:41:36 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:06:02.227 15:41:36 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:06:02.227 
15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:06:02.227 15:41:36 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:06:02.227 15:41:36 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:06:02.227 No valid GPT data, bailing 00:06:02.228 15:41:36 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:02.228 15:41:36 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:02.228 15:41:36 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:02.228 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:06:02.228 15:41:36 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:06:02.228 15:41:36 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:06:02.228 15:41:36 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:06:02.228 15:41:36 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:06:02.228 15:41:36 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:06:02.228 15:41:36 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:02.228 15:41:36 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:02.228 15:41:36 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.228 15:41:36 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.228 15:41:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:02.228 ************************************ 00:06:02.228 START TEST nvme_mount 00:06:02.228 ************************************ 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:02.228 15:41:36 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:03.163 Creating new GPT entries in memory. 00:06:03.163 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:03.163 other utilities. 00:06:03.163 15:41:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:03.163 15:41:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:03.163 15:41:37 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:03.163 15:41:37 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:03.163 15:41:37 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:04.160 Creating new GPT entries in memory. 00:06:04.160 The operation has completed successfully. 00:06:04.160 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:04.160 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:04.160 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 71318 00:06:04.427 15:41:38 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.427 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:04.427 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.427 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:04.427 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:04.427 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.427 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:04.427 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:04.427 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:04.427 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:04.427 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:04.427 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:04.427 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:04.427 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:04.427 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:04.427 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.427 15:41:39 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:04.427 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:04.427 15:41:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:04.427 15:41:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:04.686 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:04.686 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:04.686 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:04.686 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.686 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:04.686 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.945 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:04.945 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.945 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:04.945 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:04.945 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:04.945 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.204 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:05.204 15:41:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.464 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:05.464 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:05.464 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.464 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:05.464 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:05.464 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:05.464 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.464 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.723 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:05.723 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:05.723 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:05.723 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:05.723 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:06:05.983 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:05.983 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:05.983 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:05.983 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:05.983 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:05.983 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:05.983 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.983 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:05.983 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:05.983 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.983 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:05.983 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:05.983 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:05.983 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:05.983 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:05.983 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:05.983 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:05.983 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:05.984 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:05.984 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:05.984 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:05.984 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:05.984 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:05.984 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:06.243 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.243 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:06.243 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:06.243 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.243 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.243 15:41:40 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.503 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.503 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.503 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.504 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.504 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.504 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.763 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:06.763 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.023 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:07.023 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:07.023 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:07.023 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:07.023 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:07.283 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:07.283 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:06:07.283 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:07.283 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:07.283 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:07.283 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:07.283 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:07.283 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:07.283 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:07.283 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.283 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:07.283 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:07.283 15:41:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:07.283 15:41:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:07.564 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.564 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:07.564 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:07.564 15:41:42 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.564 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.564 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.824 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.824 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.824 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.824 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.824 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:07.824 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.084 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:08.084 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:08.345 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:08.345 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:08.345 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:08.345 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:08.345 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:08.345 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:08.345 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:08.345 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:08.345 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:08.345 00:06:08.345 real 0m6.269s 00:06:08.345 user 0m1.619s 00:06:08.345 sys 0m2.319s 00:06:08.345 15:41:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.345 15:41:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:08.345 ************************************ 00:06:08.345 END TEST nvme_mount 00:06:08.345 ************************************ 00:06:08.605 15:41:43 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:08.605 15:41:43 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.605 15:41:43 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.605 15:41:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:08.605 ************************************ 00:06:08.605 START TEST dm_mount 00:06:08.605 ************************************ 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- 
setup/common.sh@39 -- # local disk=nvme0n1 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:08.605 15:41:43 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:09.541 Creating new GPT entries in memory. 00:06:09.541 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:09.541 other utilities. 00:06:09.541 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:09.541 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:09.541 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:09.541 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:09.541 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:10.917 Creating new GPT entries in memory. 00:06:10.917 The operation has completed successfully. 00:06:10.917 15:41:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:10.917 15:41:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:10.917 15:41:45 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:10.917 15:41:45 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:10.917 15:41:45 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:11.855 The operation has completed successfully. 
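# nvme_mount (earlier) and dm_mount (here) share the partition_drive
# helper traced above: zap the GPT, then create part_no partitions end
# to end under flock while scripts/sync_dev_uevents.sh waits for udev to
# announce each one (the `wait 71318` / `wait 71954` lines). A condensed
# sketch of the sgdisk arithmetic with this run's numbers:
partition_drive() {
    local disk=$1 part_no=${2:-1}
    local size=$((1024 * 1024 * 1024))    # 1 GiB per partition
    local part part_start=0 part_end=0
    ((size /= 4096))                      # -> 262144 sectors, as traced
    sgdisk "/dev/$disk" --zap-all
    for ((part = 1; part <= part_no; part++)); do
        ((part_start = part_start == 0 ? 2048 : part_end + 1))
        ((part_end = part_start + size - 1))
        flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done
}
# partition_drive nvme0n1 2 reproduces the two calls above:
# --new=1:2048:264191, then --new=2:264192:526335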
00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 71954 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:11.855 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:12.114 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.114 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:12.114 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:12.114 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.114 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.114 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.373 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.373 15:41:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.373 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.373 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.373 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.373 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.942 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:12.942 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.942 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:12.942 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:12.942 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:12.942 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:12.942 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:12.942 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:13.200 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:13.200 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:13.200 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:13.200 15:41:47 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:06:13.200 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:13.200 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:13.200 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:13.200 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:13.200 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.200 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:13.200 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:13.200 15:41:47 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:13.200 15:41:47 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:13.459 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:13.459 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:13.459 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:13.459 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.459 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:13.459 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.718 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:13.718 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.718 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:13.718 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.718 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:13.718 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.285 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:14.285 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.285 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:14.285 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:14.285 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:14.285 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:14.285 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:14.285 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:14.285 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:14.543 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:14.543 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:06:14.543 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:14.543 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:14.543 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:14.543 00:06:14.543 real 0m5.925s 00:06:14.543 user 0m1.118s 00:06:14.543 sys 0m1.707s 00:06:14.543 15:41:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.543 ************************************ 00:06:14.543 15:41:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:14.543 END TEST dm_mount 00:06:14.543 ************************************ 00:06:14.543 15:41:49 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:14.543 15:41:49 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:14.543 15:41:49 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:14.543 15:41:49 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:14.543 15:41:49 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:14.543 15:41:49 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:14.543 15:41:49 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:14.802 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:14.802 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:14.802 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:14.802 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:14.802 15:41:49 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:14.802 15:41:49 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:14.802 15:41:49 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:14.802 15:41:49 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:14.802 15:41:49 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:14.802 15:41:49 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:14.802 15:41:49 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:14.802 00:06:14.802 real 0m14.697s 00:06:14.802 user 0m3.721s 00:06:14.802 sys 0m5.238s 00:06:14.802 15:41:49 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.802 ************************************ 00:06:14.802 END TEST devices 00:06:14.802 ************************************ 00:06:14.802 15:41:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:14.802 00:06:14.802 real 0m52.067s 00:06:14.802 user 0m12.225s 00:06:14.802 sys 0m19.848s 00:06:14.802 15:41:49 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.802 15:41:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:14.802 ************************************ 00:06:14.802 END TEST setup.sh 00:06:14.802 ************************************ 00:06:15.060 15:41:49 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:15.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:16.195 Hugepages 00:06:16.195 node hugesize free / total 00:06:16.195 node0 1048576kB 0 / 0 00:06:16.195 node0 2048kB 2048 / 2048 00:06:16.195 
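# `setup.sh status` (the autotest.sh@128 call above) prints the per-node
# hugepage table just shown and the PCI-to-block-device map that follows
# ("Type BDF Vendor Device ..."). The hugepage half comes straight from
# sysfs; a minimal sketch that reproduces it:
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        size=${hp##*hugepages-}    # e.g. 2048kB
        printf '%s %s %s / %s\n' "${node##*/}" "$size" \
            "$(<"$hp/free_hugepages")" "$(<"$hp/nr_hugepages")"
    done
done
# -> node0 1048576kB 0 / 0
#    node0 2048kB 2048 / 2048    (the totals in the table above)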
00:06:16.195 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:16.454 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:16.454 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:16.714 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:16.714 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:16.974 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:16.974 15:41:51 -- spdk/autotest.sh@130 -- # uname -s 00:06:16.974 15:41:51 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:16.974 15:41:51 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:16.974 15:41:51 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:17.543 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:18.481 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:18.481 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:18.481 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:18.481 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:18.481 15:41:53 -- common/autotest_common.sh@1528 -- # sleep 1 00:06:19.417 15:41:54 -- common/autotest_common.sh@1529 -- # bdfs=() 00:06:19.417 15:41:54 -- common/autotest_common.sh@1529 -- # local bdfs 00:06:19.417 15:41:54 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:06:19.417 15:41:54 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:06:19.417 15:41:54 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:19.417 15:41:54 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:19.417 15:41:54 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:19.676 15:41:54 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:19.676 15:41:54 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:19.676 15:41:54 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:06:19.676 15:41:54 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:19.676 15:41:54 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:20.244 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:20.502 Waiting for block devices as requested 00:06:20.502 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:20.760 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:20.760 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:20.760 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:26.031 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:26.031 15:42:00 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:26.031 15:42:00 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:26.031 15:42:00 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:26.031 15:42:00 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:06:26.031 15:42:00 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:26.031 15:42:00 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1503 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:26.031 15:42:00 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:06:26.031 15:42:00 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:06:26.031 15:42:00 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:26.031 15:42:00 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:26.031 15:42:00 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:26.031 15:42:00 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1553 -- # continue 00:06:26.031 15:42:00 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:26.031 15:42:00 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:26.031 15:42:00 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:26.031 15:42:00 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:06:26.031 15:42:00 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:26.031 15:42:00 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:26.031 15:42:00 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:06:26.031 15:42:00 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:06:26.031 15:42:00 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:26.031 15:42:00 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:26.031 15:42:00 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:26.031 15:42:00 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1553 -- # continue 00:06:26.031 15:42:00 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:26.031 15:42:00 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:26.031 15:42:00 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:26.031 15:42:00 -- common/autotest_common.sh@1498 -- # 
grep 0000:00:12.0/nvme/nvme 00:06:26.031 15:42:00 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:26.031 15:42:00 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:26.031 15:42:00 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme2 00:06:26.031 15:42:00 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme2 00:06:26.031 15:42:00 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme2 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme2 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:26.031 15:42:00 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:26.031 15:42:00 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme2 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:26.031 15:42:00 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1553 -- # continue 00:06:26.031 15:42:00 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:26.031 15:42:00 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:26.031 15:42:00 -- common/autotest_common.sh@1498 -- # grep 0000:00:13.0/nvme/nvme 00:06:26.031 15:42:00 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:26.031 15:42:00 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:26.031 15:42:00 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:26.031 15:42:00 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme3 00:06:26.031 15:42:00 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme3 00:06:26.031 15:42:00 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme3 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme3 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:26.031 15:42:00 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:26.031 15:42:00 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:26.031 15:42:00 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme3 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:26.031 15:42:00 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:26.288 15:42:00 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:26.288 15:42:00 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:26.288 15:42:00 -- common/autotest_common.sh@1553 -- # continue 00:06:26.288 15:42:00 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:06:26.288 15:42:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.288 15:42:00 -- common/autotest_common.sh@10 -- # set +x 00:06:26.288 15:42:00 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:26.288 15:42:00 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:26.288 15:42:00 -- common/autotest_common.sh@10 -- # set +x 00:06:26.289 15:42:00 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:26.885 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:27.820 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:27.820 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:27.820 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:27.820 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:27.820 15:42:02 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:27.820 15:42:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:27.820 15:42:02 -- common/autotest_common.sh@10 -- # set +x 00:06:27.820 15:42:02 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:27.820 15:42:02 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:06:27.820 15:42:02 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:06:27.820 15:42:02 -- common/autotest_common.sh@1573 -- # bdfs=() 00:06:27.820 15:42:02 -- common/autotest_common.sh@1573 -- # local bdfs 00:06:27.820 15:42:02 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:06:27.820 15:42:02 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:27.820 15:42:02 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:27.820 15:42:02 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:27.820 15:42:02 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:27.820 15:42:02 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:28.079 15:42:02 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:06:28.079 15:42:02 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:28.079 15:42:02 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:28.079 15:42:02 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:28.079 15:42:02 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:28.079 15:42:02 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:28.079 15:42:02 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:28.079 15:42:02 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:28.079 15:42:02 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:28.079 15:42:02 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:28.079 15:42:02 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:28.079 15:42:02 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:28.079 15:42:02 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:28.079 15:42:02 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:28.079 15:42:02 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:28.079 15:42:02 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:28.079 15:42:02 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:28.079 
15:42:02 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:28.079 15:42:02 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:06:28.079 15:42:02 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:06:28.079 15:42:02 -- common/autotest_common.sh@1589 -- # return 0 00:06:28.079 15:42:02 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:28.079 15:42:02 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:28.079 15:42:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:28.079 15:42:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:28.079 15:42:02 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:28.079 15:42:02 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:28.079 15:42:02 -- common/autotest_common.sh@10 -- # set +x 00:06:28.079 15:42:02 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:28.079 15:42:02 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:28.079 15:42:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.079 15:42:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.079 15:42:02 -- common/autotest_common.sh@10 -- # set +x 00:06:28.079 ************************************ 00:06:28.079 START TEST env 00:06:28.079 ************************************ 00:06:28.079 15:42:02 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:28.338 * Looking for test storage... 00:06:28.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:28.338 15:42:02 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:28.338 15:42:02 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.338 15:42:02 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.338 15:42:02 env -- common/autotest_common.sh@10 -- # set +x 00:06:28.338 ************************************ 00:06:28.338 START TEST env_memory 00:06:28.338 ************************************ 00:06:28.338 15:42:02 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:28.338 00:06:28.338 00:06:28.338 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.338 http://cunit.sourceforge.net/ 00:06:28.338 00:06:28.338 00:06:28.338 Suite: memory 00:06:28.338 Test: alloc and free memory map ...[2024-07-20 15:42:02.979199] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:28.338 passed 00:06:28.338 Test: mem map translation ...[2024-07-20 15:42:03.020327] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:28.338 [2024-07-20 15:42:03.020384] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:28.338 [2024-07-20 15:42:03.020451] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:28.338 [2024-07-20 15:42:03.020475] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:28.338 passed 00:06:28.338 Test: mem map registration ...[2024-07-20 15:42:03.084134] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:06:28.338 [2024-07-20 15:42:03.084180] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:28.338 passed 00:06:28.597 Test: mem map adjacent registrations ...passed 00:06:28.597 00:06:28.597 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.597 suites 1 1 n/a 0 0 00:06:28.597 tests 4 4 4 0 0 00:06:28.597 asserts 152 152 152 0 n/a 00:06:28.597 00:06:28.597 Elapsed time = 0.226 seconds 00:06:28.597 00:06:28.597 real 0m0.267s 00:06:28.597 user 0m0.239s 00:06:28.597 sys 0m0.019s 00:06:28.597 15:42:03 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.597 15:42:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:28.597 ************************************ 00:06:28.597 END TEST env_memory 00:06:28.597 ************************************ 00:06:28.597 15:42:03 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:28.597 15:42:03 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.597 15:42:03 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.597 15:42:03 env -- common/autotest_common.sh@10 -- # set +x 00:06:28.597 ************************************ 00:06:28.597 START TEST env_vtophys 00:06:28.597 ************************************ 00:06:28.597 15:42:03 env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:28.597 EAL: lib.eal log level changed from notice to debug 00:06:28.597 EAL: Detected lcore 0 as core 0 on socket 0 00:06:28.597 EAL: Detected lcore 1 as core 0 on socket 0 00:06:28.597 EAL: Detected lcore 2 as core 0 on socket 0 00:06:28.597 EAL: Detected lcore 3 as core 0 on socket 0 00:06:28.597 EAL: Detected lcore 4 as core 0 on socket 0 00:06:28.597 EAL: Detected lcore 5 as core 0 on socket 0 00:06:28.597 EAL: Detected lcore 6 as core 0 on socket 0 00:06:28.597 EAL: Detected lcore 7 as core 0 on socket 0 00:06:28.597 EAL: Detected lcore 8 as core 0 on socket 0 00:06:28.597 EAL: Detected lcore 9 as core 0 on socket 0 00:06:28.597 EAL: Maximum logical cores by configuration: 128 00:06:28.597 EAL: Detected CPU lcores: 10 00:06:28.597 EAL: Detected NUMA nodes: 1 00:06:28.597 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:28.597 EAL: Detected shared linkage of DPDK 00:06:28.597 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:28.597 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:28.597 EAL: Registered [vdev] bus. 
00:06:28.597 EAL: bus.vdev log level changed from disabled to notice 00:06:28.597 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:28.597 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:28.597 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:28.597 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:28.597 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:28.597 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:28.597 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:28.597 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:28.597 EAL: No shared files mode enabled, IPC will be disabled 00:06:28.597 EAL: No shared files mode enabled, IPC is disabled 00:06:28.597 EAL: Selected IOVA mode 'PA' 00:06:28.597 EAL: Probing VFIO support... 00:06:28.597 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:28.597 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:28.597 EAL: Ask a virtual area of 0x2e000 bytes 00:06:28.597 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:28.597 EAL: Setting up physically contiguous memory... 00:06:28.597 EAL: Setting maximum number of open files to 524288 00:06:28.597 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:28.597 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:28.597 EAL: Ask a virtual area of 0x61000 bytes 00:06:28.597 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:28.597 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:28.597 EAL: Ask a virtual area of 0x400000000 bytes 00:06:28.597 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:28.597 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:28.597 EAL: Ask a virtual area of 0x61000 bytes 00:06:28.597 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:28.597 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:28.597 EAL: Ask a virtual area of 0x400000000 bytes 00:06:28.597 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:28.597 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:28.597 EAL: Ask a virtual area of 0x61000 bytes 00:06:28.597 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:28.597 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:28.597 EAL: Ask a virtual area of 0x400000000 bytes 00:06:28.597 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:28.597 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:28.597 EAL: Ask a virtual area of 0x61000 bytes 00:06:28.597 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:28.597 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:28.597 EAL: Ask a virtual area of 0x400000000 bytes 00:06:28.597 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:28.597 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:28.597 EAL: Hugepages will be freed exactly as allocated. 
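In the EAL probe above, VFIO support is skipped because /sys/module/vfio is absent, which is also why IOVA mode 'PA' ends up selected: without an IOMMU-backed driver the devices stay bound to uio_pci_generic, as seen earlier in this log. A small sketch of the same precondition check, illustrative only and not part of the suite:

    # Mirror the EAL fallback logged above: probe for the vfio kernel module
    # and its container device; if either is missing, DPDK skips VFIO and
    # falls back to uio with IOVA mode PA.
    if [ -d /sys/module/vfio ] && [ -c /dev/vfio/vfio ]; then
      echo "VFIO available: IOVA mode VA is possible"
    else
      echo "VFIO not loaded: uio fallback, IOVA mode PA"
    fi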
00:06:28.597 EAL: No shared files mode enabled, IPC is disabled 00:06:28.597 EAL: No shared files mode enabled, IPC is disabled 00:06:28.855 EAL: TSC frequency is ~2490000 KHz 00:06:28.855 EAL: Main lcore 0 is ready (tid=7efcc401aa40;cpuset=[0]) 00:06:28.855 EAL: Trying to obtain current memory policy. 00:06:28.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:28.855 EAL: Restoring previous memory policy: 0 00:06:28.855 EAL: request: mp_malloc_sync 00:06:28.855 EAL: No shared files mode enabled, IPC is disabled 00:06:28.855 EAL: Heap on socket 0 was expanded by 2MB 00:06:28.855 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:28.855 EAL: No shared files mode enabled, IPC is disabled 00:06:28.856 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:28.856 EAL: Mem event callback 'spdk:(nil)' registered 00:06:28.856 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:28.856 00:06:28.856 00:06:28.856 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.856 http://cunit.sourceforge.net/ 00:06:28.856 00:06:28.856 00:06:28.856 Suite: components_suite 00:06:29.114 Test: vtophys_malloc_test ...passed 00:06:29.114 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:29.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.114 EAL: Restoring previous memory policy: 4 00:06:29.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.114 EAL: request: mp_malloc_sync 00:06:29.114 EAL: No shared files mode enabled, IPC is disabled 00:06:29.114 EAL: Heap on socket 0 was expanded by 4MB 00:06:29.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.114 EAL: request: mp_malloc_sync 00:06:29.114 EAL: No shared files mode enabled, IPC is disabled 00:06:29.114 EAL: Heap on socket 0 was shrunk by 4MB 00:06:29.114 EAL: Trying to obtain current memory policy. 00:06:29.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.114 EAL: Restoring previous memory policy: 4 00:06:29.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.114 EAL: request: mp_malloc_sync 00:06:29.114 EAL: No shared files mode enabled, IPC is disabled 00:06:29.114 EAL: Heap on socket 0 was expanded by 6MB 00:06:29.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.114 EAL: request: mp_malloc_sync 00:06:29.114 EAL: No shared files mode enabled, IPC is disabled 00:06:29.114 EAL: Heap on socket 0 was shrunk by 6MB 00:06:29.114 EAL: Trying to obtain current memory policy. 00:06:29.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.114 EAL: Restoring previous memory policy: 4 00:06:29.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.114 EAL: request: mp_malloc_sync 00:06:29.114 EAL: No shared files mode enabled, IPC is disabled 00:06:29.114 EAL: Heap on socket 0 was expanded by 10MB 00:06:29.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.114 EAL: request: mp_malloc_sync 00:06:29.114 EAL: No shared files mode enabled, IPC is disabled 00:06:29.114 EAL: Heap on socket 0 was shrunk by 10MB 00:06:29.114 EAL: Trying to obtain current memory policy. 
00:06:29.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.114 EAL: Restoring previous memory policy: 4 00:06:29.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.114 EAL: request: mp_malloc_sync 00:06:29.114 EAL: No shared files mode enabled, IPC is disabled 00:06:29.114 EAL: Heap on socket 0 was expanded by 18MB 00:06:29.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.114 EAL: request: mp_malloc_sync 00:06:29.114 EAL: No shared files mode enabled, IPC is disabled 00:06:29.114 EAL: Heap on socket 0 was shrunk by 18MB 00:06:29.114 EAL: Trying to obtain current memory policy. 00:06:29.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.114 EAL: Restoring previous memory policy: 4 00:06:29.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.114 EAL: request: mp_malloc_sync 00:06:29.114 EAL: No shared files mode enabled, IPC is disabled 00:06:29.114 EAL: Heap on socket 0 was expanded by 34MB 00:06:29.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.114 EAL: request: mp_malloc_sync 00:06:29.114 EAL: No shared files mode enabled, IPC is disabled 00:06:29.114 EAL: Heap on socket 0 was shrunk by 34MB 00:06:29.114 EAL: Trying to obtain current memory policy. 00:06:29.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.114 EAL: Restoring previous memory policy: 4 00:06:29.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.114 EAL: request: mp_malloc_sync 00:06:29.114 EAL: No shared files mode enabled, IPC is disabled 00:06:29.114 EAL: Heap on socket 0 was expanded by 66MB 00:06:29.114 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.114 EAL: request: mp_malloc_sync 00:06:29.114 EAL: No shared files mode enabled, IPC is disabled 00:06:29.114 EAL: Heap on socket 0 was shrunk by 66MB 00:06:29.114 EAL: Trying to obtain current memory policy. 00:06:29.115 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.373 EAL: Restoring previous memory policy: 4 00:06:29.373 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.373 EAL: request: mp_malloc_sync 00:06:29.373 EAL: No shared files mode enabled, IPC is disabled 00:06:29.373 EAL: Heap on socket 0 was expanded by 130MB 00:06:29.373 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.373 EAL: request: mp_malloc_sync 00:06:29.373 EAL: No shared files mode enabled, IPC is disabled 00:06:29.373 EAL: Heap on socket 0 was shrunk by 130MB 00:06:29.373 EAL: Trying to obtain current memory policy. 00:06:29.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.373 EAL: Restoring previous memory policy: 4 00:06:29.373 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.373 EAL: request: mp_malloc_sync 00:06:29.373 EAL: No shared files mode enabled, IPC is disabled 00:06:29.373 EAL: Heap on socket 0 was expanded by 258MB 00:06:29.373 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.373 EAL: request: mp_malloc_sync 00:06:29.373 EAL: No shared files mode enabled, IPC is disabled 00:06:29.373 EAL: Heap on socket 0 was shrunk by 258MB 00:06:29.373 EAL: Trying to obtain current memory policy. 
00:06:29.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.631 EAL: Restoring previous memory policy: 4 00:06:29.631 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.631 EAL: request: mp_malloc_sync 00:06:29.631 EAL: No shared files mode enabled, IPC is disabled 00:06:29.631 EAL: Heap on socket 0 was expanded by 514MB 00:06:29.631 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.631 EAL: request: mp_malloc_sync 00:06:29.632 EAL: No shared files mode enabled, IPC is disabled 00:06:29.632 EAL: Heap on socket 0 was shrunk by 514MB 00:06:29.632 EAL: Trying to obtain current memory policy. 00:06:29.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:29.890 EAL: Restoring previous memory policy: 4 00:06:29.890 EAL: Calling mem event callback 'spdk:(nil)' 00:06:29.890 EAL: request: mp_malloc_sync 00:06:29.890 EAL: No shared files mode enabled, IPC is disabled 00:06:29.890 EAL: Heap on socket 0 was expanded by 1026MB 00:06:30.147 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.406 passed 00:06:30.406 00:06:30.406 Run Summary: Type Total Ran Passed Failed Inactive 00:06:30.406 suites 1 1 n/a 0 0 00:06:30.406 tests 2 2 2 0 0 00:06:30.407 asserts 5386 5386 5386 0 n/a 00:06:30.407 00:06:30.407 Elapsed time = 1.493 seconds 00:06:30.407 EAL: request: mp_malloc_sync 00:06:30.407 EAL: No shared files mode enabled, IPC is disabled 00:06:30.407 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:30.407 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.407 EAL: request: mp_malloc_sync 00:06:30.407 EAL: No shared files mode enabled, IPC is disabled 00:06:30.407 EAL: Heap on socket 0 was shrunk by 2MB 00:06:30.407 EAL: No shared files mode enabled, IPC is disabled 00:06:30.407 EAL: No shared files mode enabled, IPC is disabled 00:06:30.407 EAL: No shared files mode enabled, IPC is disabled 00:06:30.407 00:06:30.407 real 0m1.732s 00:06:30.407 user 0m0.835s 00:06:30.407 sys 0m0.766s 00:06:30.407 15:42:04 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.407 15:42:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:30.407 ************************************ 00:06:30.407 END TEST env_vtophys 00:06:30.407 ************************************ 00:06:30.407 15:42:05 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:30.407 15:42:05 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.407 15:42:05 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.407 15:42:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:30.407 ************************************ 00:06:30.407 START TEST env_pci 00:06:30.407 ************************************ 00:06:30.407 15:42:05 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:30.407 00:06:30.407 00:06:30.407 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.407 http://cunit.sourceforge.net/ 00:06:30.407 00:06:30.407 00:06:30.407 Suite: pci 00:06:30.407 Test: pci_hook ...[2024-07-20 15:42:05.078251] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 73746 has claimed it 00:06:30.407 passed 00:06:30.407 00:06:30.407 EAL: Cannot find device (10000:00:01.0) 00:06:30.407 EAL: Failed to attach device on primary process 00:06:30.407 Run Summary: Type Total Ran Passed Failed Inactive 00:06:30.407 suites 1 1 n/a 0 0 00:06:30.407 tests 1 1 1 0 0 
00:06:30.407 asserts 25 25 25 0 n/a 00:06:30.407 00:06:30.407 Elapsed time = 0.010 seconds 00:06:30.407 00:06:30.407 real 0m0.098s 00:06:30.407 user 0m0.045s 00:06:30.407 sys 0m0.053s 00:06:30.407 15:42:05 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.407 15:42:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:30.407 ************************************ 00:06:30.407 END TEST env_pci 00:06:30.407 ************************************ 00:06:30.666 15:42:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:30.666 15:42:05 env -- env/env.sh@15 -- # uname 00:06:30.666 15:42:05 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:30.666 15:42:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:30.666 15:42:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:30.666 15:42:05 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:30.666 15:42:05 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.666 15:42:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:30.666 ************************************ 00:06:30.666 START TEST env_dpdk_post_init 00:06:30.666 ************************************ 00:06:30.666 15:42:05 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:30.666 EAL: Detected CPU lcores: 10 00:06:30.666 EAL: Detected NUMA nodes: 1 00:06:30.666 EAL: Detected shared linkage of DPDK 00:06:30.666 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:30.666 EAL: Selected IOVA mode 'PA' 00:06:30.666 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:30.666 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:30.666 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:30.666 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:30.666 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:30.924 Starting DPDK initialization... 00:06:30.924 Starting SPDK post initialization... 00:06:30.924 SPDK NVMe probe 00:06:30.924 Attaching to 0000:00:10.0 00:06:30.924 Attaching to 0000:00:11.0 00:06:30.924 Attaching to 0000:00:12.0 00:06:30.924 Attaching to 0000:00:13.0 00:06:30.924 Attached to 0000:00:10.0 00:06:30.924 Attached to 0000:00:11.0 00:06:30.924 Attached to 0000:00:13.0 00:06:30.924 Attached to 0000:00:12.0 00:06:30.924 Cleaning up... 
00:06:30.924 00:06:30.924 real 0m0.248s 00:06:30.924 user 0m0.077s 00:06:30.924 sys 0m0.072s 00:06:30.924 15:42:05 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.924 15:42:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:30.924 ************************************ 00:06:30.924 END TEST env_dpdk_post_init 00:06:30.924 ************************************ 00:06:30.924 15:42:05 env -- env/env.sh@26 -- # uname 00:06:30.924 15:42:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:30.924 15:42:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:30.924 15:42:05 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.924 15:42:05 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.924 15:42:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:30.924 ************************************ 00:06:30.924 START TEST env_mem_callbacks 00:06:30.924 ************************************ 00:06:30.924 15:42:05 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:30.924 EAL: Detected CPU lcores: 10 00:06:30.924 EAL: Detected NUMA nodes: 1 00:06:30.924 EAL: Detected shared linkage of DPDK 00:06:30.924 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:30.924 EAL: Selected IOVA mode 'PA' 00:06:30.924 00:06:30.924 00:06:30.924 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.924 http://cunit.sourceforge.net/ 00:06:30.924 00:06:30.924 00:06:30.924 Suite: memory 00:06:30.924 Test: test ... 00:06:30.924 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:30.924 register 0x200000200000 2097152 00:06:30.924 malloc 3145728 00:06:30.924 register 0x200000400000 4194304 00:06:30.924 buf 0x200000500000 len 3145728 PASSED 00:06:30.924 malloc 64 00:06:30.924 buf 0x2000004fff40 len 64 PASSED 00:06:30.924 malloc 4194304 00:06:30.924 register 0x200000800000 6291456 00:06:30.924 buf 0x200000a00000 len 4194304 PASSED 00:06:30.924 free 0x200000500000 3145728 00:06:30.924 free 0x2000004fff40 64 00:06:30.924 unregister 0x200000400000 4194304 PASSED 00:06:30.924 free 0x200000a00000 4194304 00:06:30.924 unregister 0x200000800000 6291456 PASSED 00:06:30.924 malloc 8388608 00:06:31.183 register 0x200000400000 10485760 00:06:31.183 buf 0x200000600000 len 8388608 PASSED 00:06:31.183 free 0x200000600000 8388608 00:06:31.183 unregister 0x200000400000 10485760 PASSED 00:06:31.183 passed 00:06:31.183 00:06:31.183 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.183 suites 1 1 n/a 0 0 00:06:31.183 tests 1 1 1 0 0 00:06:31.183 asserts 15 15 15 0 n/a 00:06:31.183 00:06:31.183 Elapsed time = 0.013 seconds 00:06:31.183 00:06:31.183 real 0m0.190s 00:06:31.183 user 0m0.033s 00:06:31.183 sys 0m0.056s 00:06:31.183 15:42:05 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.183 15:42:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:31.183 ************************************ 00:06:31.183 END TEST env_mem_callbacks 00:06:31.183 ************************************ 00:06:31.183 00:06:31.183 real 0m3.036s 00:06:31.183 user 0m1.390s 00:06:31.183 sys 0m1.293s 00:06:31.183 15:42:05 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.183 15:42:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:31.183 ************************************ 00:06:31.183 END TEST env 00:06:31.183 
************************************ 00:06:31.183 15:42:05 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:31.183 15:42:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.183 15:42:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.183 15:42:05 -- common/autotest_common.sh@10 -- # set +x 00:06:31.183 ************************************ 00:06:31.183 START TEST rpc 00:06:31.183 ************************************ 00:06:31.183 15:42:05 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:31.441 * Looking for test storage... 00:06:31.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:31.441 15:42:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=73865 00:06:31.441 15:42:06 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:31.441 15:42:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.441 15:42:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 73865 00:06:31.441 15:42:06 rpc -- common/autotest_common.sh@827 -- # '[' -z 73865 ']' 00:06:31.441 15:42:06 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.441 15:42:06 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.441 15:42:06 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.441 15:42:06 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.441 15:42:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.441 [2024-07-20 15:42:06.117919] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:31.441 [2024-07-20 15:42:06.118070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73865 ] 00:06:31.699 [2024-07-20 15:42:06.269052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.699 [2024-07-20 15:42:06.313062] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:31.699 [2024-07-20 15:42:06.313122] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 73865' to capture a snapshot of events at runtime. 00:06:31.699 [2024-07-20 15:42:06.313136] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.699 [2024-07-20 15:42:06.313148] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.699 [2024-07-20 15:42:06.313177] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid73865 for offline analysis/debug. 
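The rpc tests that follow drive this spdk_tgt instance through the rpc_cmd helper, which forwards each command to scripts/rpc.py over the same /var/tmp/spdk.sock socket that waitforlisten polls above. A sketch of the equivalent calls issued by hand from the repo root, assuming the target started above is still listening:

    # Talk to the spdk_tgt launched above on its default UNIX socket.
    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods          # list the RPCs the target exposes
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512 # the call rpc_integrity makes (8 MiB, 512 B blocks)
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs           # dumps bdev JSON like the output below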
00:06:31.699 [2024-07-20 15:42:06.313219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.266 15:42:06 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.266 15:42:06 rpc -- common/autotest_common.sh@860 -- # return 0 00:06:32.266 15:42:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:32.266 15:42:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:32.266 15:42:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:32.266 15:42:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:32.266 15:42:06 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.266 15:42:06 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.266 15:42:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.266 ************************************ 00:06:32.266 START TEST rpc_integrity 00:06:32.266 ************************************ 00:06:32.266 15:42:06 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:32.266 15:42:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:32.266 15:42:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.266 15:42:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.266 15:42:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.266 15:42:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:32.266 15:42:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:32.266 15:42:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:32.266 15:42:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:32.266 15:42:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.266 15:42:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.266 15:42:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.266 15:42:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:32.266 15:42:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:32.266 15:42:06 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.266 15:42:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.266 15:42:06 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.266 15:42:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:32.266 { 00:06:32.266 "name": "Malloc0", 00:06:32.266 "aliases": [ 00:06:32.266 "c25e949a-f874-410d-8835-e6cfc4df3b51" 00:06:32.266 ], 00:06:32.266 "product_name": "Malloc disk", 00:06:32.266 "block_size": 512, 00:06:32.266 "num_blocks": 16384, 00:06:32.266 "uuid": "c25e949a-f874-410d-8835-e6cfc4df3b51", 00:06:32.266 "assigned_rate_limits": { 00:06:32.266 "rw_ios_per_sec": 0, 00:06:32.266 "rw_mbytes_per_sec": 0, 00:06:32.266 "r_mbytes_per_sec": 0, 00:06:32.266 "w_mbytes_per_sec": 0 00:06:32.266 }, 00:06:32.266 "claimed": false, 00:06:32.266 "zoned": false, 00:06:32.266 "supported_io_types": { 00:06:32.266 "read": true, 00:06:32.266 "write": true, 00:06:32.266 "unmap": true, 00:06:32.266 "write_zeroes": 
true, 00:06:32.266 "flush": true, 00:06:32.266 "reset": true, 00:06:32.266 "compare": false, 00:06:32.266 "compare_and_write": false, 00:06:32.266 "abort": true, 00:06:32.266 "nvme_admin": false, 00:06:32.266 "nvme_io": false 00:06:32.266 }, 00:06:32.266 "memory_domains": [ 00:06:32.266 { 00:06:32.266 "dma_device_id": "system", 00:06:32.266 "dma_device_type": 1 00:06:32.266 }, 00:06:32.266 { 00:06:32.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.266 "dma_device_type": 2 00:06:32.266 } 00:06:32.266 ], 00:06:32.266 "driver_specific": {} 00:06:32.266 } 00:06:32.266 ]' 00:06:32.266 15:42:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:32.266 15:42:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:32.266 15:42:07 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:32.266 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.266 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.266 [2024-07-20 15:42:07.038779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:32.266 [2024-07-20 15:42:07.038865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:32.266 [2024-07-20 15:42:07.038899] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:32.266 [2024-07-20 15:42:07.038931] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:32.266 [2024-07-20 15:42:07.041442] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:32.266 [2024-07-20 15:42:07.041488] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:32.266 Passthru0 00:06:32.266 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.266 15:42:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:32.266 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.266 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.525 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.525 15:42:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:32.525 { 00:06:32.525 "name": "Malloc0", 00:06:32.525 "aliases": [ 00:06:32.525 "c25e949a-f874-410d-8835-e6cfc4df3b51" 00:06:32.525 ], 00:06:32.525 "product_name": "Malloc disk", 00:06:32.525 "block_size": 512, 00:06:32.525 "num_blocks": 16384, 00:06:32.525 "uuid": "c25e949a-f874-410d-8835-e6cfc4df3b51", 00:06:32.525 "assigned_rate_limits": { 00:06:32.525 "rw_ios_per_sec": 0, 00:06:32.525 "rw_mbytes_per_sec": 0, 00:06:32.525 "r_mbytes_per_sec": 0, 00:06:32.525 "w_mbytes_per_sec": 0 00:06:32.525 }, 00:06:32.525 "claimed": true, 00:06:32.525 "claim_type": "exclusive_write", 00:06:32.525 "zoned": false, 00:06:32.525 "supported_io_types": { 00:06:32.525 "read": true, 00:06:32.525 "write": true, 00:06:32.525 "unmap": true, 00:06:32.525 "write_zeroes": true, 00:06:32.525 "flush": true, 00:06:32.525 "reset": true, 00:06:32.525 "compare": false, 00:06:32.525 "compare_and_write": false, 00:06:32.525 "abort": true, 00:06:32.525 "nvme_admin": false, 00:06:32.525 "nvme_io": false 00:06:32.525 }, 00:06:32.525 "memory_domains": [ 00:06:32.525 { 00:06:32.525 "dma_device_id": "system", 00:06:32.525 "dma_device_type": 1 00:06:32.525 }, 00:06:32.525 { 00:06:32.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.525 "dma_device_type": 2 00:06:32.525 } 
00:06:32.525 ], 00:06:32.525 "driver_specific": {} 00:06:32.525 }, 00:06:32.525 { 00:06:32.525 "name": "Passthru0", 00:06:32.525 "aliases": [ 00:06:32.525 "1afe2d5e-5c07-5aba-8b18-a049328b9f2e" 00:06:32.525 ], 00:06:32.525 "product_name": "passthru", 00:06:32.525 "block_size": 512, 00:06:32.525 "num_blocks": 16384, 00:06:32.525 "uuid": "1afe2d5e-5c07-5aba-8b18-a049328b9f2e", 00:06:32.525 "assigned_rate_limits": { 00:06:32.525 "rw_ios_per_sec": 0, 00:06:32.525 "rw_mbytes_per_sec": 0, 00:06:32.525 "r_mbytes_per_sec": 0, 00:06:32.525 "w_mbytes_per_sec": 0 00:06:32.525 }, 00:06:32.525 "claimed": false, 00:06:32.525 "zoned": false, 00:06:32.525 "supported_io_types": { 00:06:32.525 "read": true, 00:06:32.525 "write": true, 00:06:32.525 "unmap": true, 00:06:32.525 "write_zeroes": true, 00:06:32.525 "flush": true, 00:06:32.525 "reset": true, 00:06:32.525 "compare": false, 00:06:32.525 "compare_and_write": false, 00:06:32.525 "abort": true, 00:06:32.525 "nvme_admin": false, 00:06:32.525 "nvme_io": false 00:06:32.525 }, 00:06:32.525 "memory_domains": [ 00:06:32.525 { 00:06:32.525 "dma_device_id": "system", 00:06:32.525 "dma_device_type": 1 00:06:32.525 }, 00:06:32.525 { 00:06:32.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.525 "dma_device_type": 2 00:06:32.525 } 00:06:32.525 ], 00:06:32.525 "driver_specific": { 00:06:32.525 "passthru": { 00:06:32.525 "name": "Passthru0", 00:06:32.525 "base_bdev_name": "Malloc0" 00:06:32.525 } 00:06:32.525 } 00:06:32.525 } 00:06:32.525 ]' 00:06:32.525 15:42:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:32.525 15:42:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:32.525 15:42:07 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:32.525 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.525 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.525 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.525 15:42:07 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:32.525 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.525 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.525 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.525 15:42:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:32.525 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.525 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.525 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.525 15:42:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:32.525 15:42:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:32.525 ************************************ 00:06:32.525 END TEST rpc_integrity 00:06:32.525 ************************************ 00:06:32.525 15:42:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:32.525 00:06:32.525 real 0m0.292s 00:06:32.525 user 0m0.174s 00:06:32.525 sys 0m0.055s 00:06:32.525 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.525 15:42:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:32.525 15:42:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:32.526 15:42:07 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.526 15:42:07 rpc -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.526 15:42:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.526 ************************************ 00:06:32.526 START TEST rpc_plugins 00:06:32.526 ************************************ 00:06:32.526 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:06:32.526 15:42:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:32.526 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.526 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:32.526 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.526 15:42:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:32.526 15:42:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:32.526 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.526 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:32.526 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.526 15:42:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:32.526 { 00:06:32.526 "name": "Malloc1", 00:06:32.526 "aliases": [ 00:06:32.526 "0bdec5d5-6f97-46f2-a78f-eb38b65584d7" 00:06:32.526 ], 00:06:32.526 "product_name": "Malloc disk", 00:06:32.526 "block_size": 4096, 00:06:32.526 "num_blocks": 256, 00:06:32.526 "uuid": "0bdec5d5-6f97-46f2-a78f-eb38b65584d7", 00:06:32.526 "assigned_rate_limits": { 00:06:32.526 "rw_ios_per_sec": 0, 00:06:32.526 "rw_mbytes_per_sec": 0, 00:06:32.526 "r_mbytes_per_sec": 0, 00:06:32.526 "w_mbytes_per_sec": 0 00:06:32.526 }, 00:06:32.526 "claimed": false, 00:06:32.526 "zoned": false, 00:06:32.526 "supported_io_types": { 00:06:32.526 "read": true, 00:06:32.526 "write": true, 00:06:32.526 "unmap": true, 00:06:32.526 "write_zeroes": true, 00:06:32.526 "flush": true, 00:06:32.526 "reset": true, 00:06:32.526 "compare": false, 00:06:32.526 "compare_and_write": false, 00:06:32.526 "abort": true, 00:06:32.526 "nvme_admin": false, 00:06:32.526 "nvme_io": false 00:06:32.526 }, 00:06:32.526 "memory_domains": [ 00:06:32.526 { 00:06:32.526 "dma_device_id": "system", 00:06:32.526 "dma_device_type": 1 00:06:32.526 }, 00:06:32.526 { 00:06:32.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.526 "dma_device_type": 2 00:06:32.526 } 00:06:32.526 ], 00:06:32.526 "driver_specific": {} 00:06:32.526 } 00:06:32.526 ]' 00:06:32.526 15:42:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:32.784 15:42:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:32.784 15:42:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:32.784 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.784 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:32.784 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.784 15:42:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:32.784 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.784 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:32.784 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.784 15:42:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:32.784 15:42:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:32.784 ************************************ 
00:06:32.784 END TEST rpc_plugins 00:06:32.784 ************************************ 00:06:32.784 15:42:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:32.784 00:06:32.784 real 0m0.154s 00:06:32.785 user 0m0.097s 00:06:32.785 sys 0m0.025s 00:06:32.785 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.785 15:42:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:32.785 15:42:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:32.785 15:42:07 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.785 15:42:07 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.785 15:42:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.785 ************************************ 00:06:32.785 START TEST rpc_trace_cmd_test 00:06:32.785 ************************************ 00:06:32.785 15:42:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:06:32.785 15:42:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:32.785 15:42:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:32.785 15:42:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.785 15:42:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.785 15:42:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.785 15:42:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:32.785 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid73865", 00:06:32.785 "tpoint_group_mask": "0x8", 00:06:32.785 "iscsi_conn": { 00:06:32.785 "mask": "0x2", 00:06:32.785 "tpoint_mask": "0x0" 00:06:32.785 }, 00:06:32.785 "scsi": { 00:06:32.785 "mask": "0x4", 00:06:32.785 "tpoint_mask": "0x0" 00:06:32.785 }, 00:06:32.785 "bdev": { 00:06:32.785 "mask": "0x8", 00:06:32.785 "tpoint_mask": "0xffffffffffffffff" 00:06:32.785 }, 00:06:32.785 "nvmf_rdma": { 00:06:32.785 "mask": "0x10", 00:06:32.785 "tpoint_mask": "0x0" 00:06:32.785 }, 00:06:32.785 "nvmf_tcp": { 00:06:32.785 "mask": "0x20", 00:06:32.785 "tpoint_mask": "0x0" 00:06:32.785 }, 00:06:32.785 "ftl": { 00:06:32.785 "mask": "0x40", 00:06:32.785 "tpoint_mask": "0x0" 00:06:32.785 }, 00:06:32.785 "blobfs": { 00:06:32.785 "mask": "0x80", 00:06:32.785 "tpoint_mask": "0x0" 00:06:32.785 }, 00:06:32.785 "dsa": { 00:06:32.785 "mask": "0x200", 00:06:32.785 "tpoint_mask": "0x0" 00:06:32.785 }, 00:06:32.785 "thread": { 00:06:32.785 "mask": "0x400", 00:06:32.785 "tpoint_mask": "0x0" 00:06:32.785 }, 00:06:32.785 "nvme_pcie": { 00:06:32.785 "mask": "0x800", 00:06:32.785 "tpoint_mask": "0x0" 00:06:32.785 }, 00:06:32.785 "iaa": { 00:06:32.785 "mask": "0x1000", 00:06:32.785 "tpoint_mask": "0x0" 00:06:32.785 }, 00:06:32.785 "nvme_tcp": { 00:06:32.785 "mask": "0x2000", 00:06:32.785 "tpoint_mask": "0x0" 00:06:32.785 }, 00:06:32.785 "bdev_nvme": { 00:06:32.785 "mask": "0x4000", 00:06:32.785 "tpoint_mask": "0x0" 00:06:32.785 }, 00:06:32.785 "sock": { 00:06:32.785 "mask": "0x8000", 00:06:32.785 "tpoint_mask": "0x0" 00:06:32.785 } 00:06:32.785 }' 00:06:32.785 15:42:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:32.785 15:42:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:32.785 15:42:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:32.785 15:42:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:32.785 15:42:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:06:32.785 15:42:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:32.785 15:42:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:33.044 15:42:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:33.044 15:42:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:33.044 ************************************ 00:06:33.044 END TEST rpc_trace_cmd_test 00:06:33.044 ************************************ 00:06:33.044 15:42:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:33.044 00:06:33.044 real 0m0.161s 00:06:33.044 user 0m0.135s 00:06:33.044 sys 0m0.018s 00:06:33.044 15:42:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.044 15:42:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.044 15:42:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:33.044 15:42:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:33.044 15:42:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:33.044 15:42:07 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.044 15:42:07 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.044 15:42:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.044 ************************************ 00:06:33.044 START TEST rpc_daemon_integrity 00:06:33.044 ************************************ 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:33.044 { 00:06:33.044 "name": "Malloc2", 00:06:33.044 "aliases": [ 00:06:33.044 "998b51d9-05b5-4194-ab9d-637cfac79266" 00:06:33.044 ], 00:06:33.044 "product_name": "Malloc disk", 00:06:33.044 "block_size": 512, 00:06:33.044 "num_blocks": 16384, 00:06:33.044 "uuid": "998b51d9-05b5-4194-ab9d-637cfac79266", 00:06:33.044 "assigned_rate_limits": { 00:06:33.044 "rw_ios_per_sec": 0, 00:06:33.044 
"rw_mbytes_per_sec": 0, 00:06:33.044 "r_mbytes_per_sec": 0, 00:06:33.044 "w_mbytes_per_sec": 0 00:06:33.044 }, 00:06:33.044 "claimed": false, 00:06:33.044 "zoned": false, 00:06:33.044 "supported_io_types": { 00:06:33.044 "read": true, 00:06:33.044 "write": true, 00:06:33.044 "unmap": true, 00:06:33.044 "write_zeroes": true, 00:06:33.044 "flush": true, 00:06:33.044 "reset": true, 00:06:33.044 "compare": false, 00:06:33.044 "compare_and_write": false, 00:06:33.044 "abort": true, 00:06:33.044 "nvme_admin": false, 00:06:33.044 "nvme_io": false 00:06:33.044 }, 00:06:33.044 "memory_domains": [ 00:06:33.044 { 00:06:33.044 "dma_device_id": "system", 00:06:33.044 "dma_device_type": 1 00:06:33.044 }, 00:06:33.044 { 00:06:33.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.044 "dma_device_type": 2 00:06:33.044 } 00:06:33.044 ], 00:06:33.044 "driver_specific": {} 00:06:33.044 } 00:06:33.044 ]' 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.044 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.044 [2024-07-20 15:42:07.834489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:33.044 [2024-07-20 15:42:07.834550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:33.044 [2024-07-20 15:42:07.834572] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:33.044 [2024-07-20 15:42:07.834586] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:33.044 [2024-07-20 15:42:07.836938] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:33.044 [2024-07-20 15:42:07.836983] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:33.321 Passthru0 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:33.321 { 00:06:33.321 "name": "Malloc2", 00:06:33.321 "aliases": [ 00:06:33.321 "998b51d9-05b5-4194-ab9d-637cfac79266" 00:06:33.321 ], 00:06:33.321 "product_name": "Malloc disk", 00:06:33.321 "block_size": 512, 00:06:33.321 "num_blocks": 16384, 00:06:33.321 "uuid": "998b51d9-05b5-4194-ab9d-637cfac79266", 00:06:33.321 "assigned_rate_limits": { 00:06:33.321 "rw_ios_per_sec": 0, 00:06:33.321 "rw_mbytes_per_sec": 0, 00:06:33.321 "r_mbytes_per_sec": 0, 00:06:33.321 "w_mbytes_per_sec": 0 00:06:33.321 }, 00:06:33.321 "claimed": true, 00:06:33.321 "claim_type": "exclusive_write", 00:06:33.321 "zoned": false, 00:06:33.321 "supported_io_types": { 00:06:33.321 "read": true, 00:06:33.321 "write": true, 00:06:33.321 "unmap": true, 00:06:33.321 "write_zeroes": true, 00:06:33.321 "flush": true, 00:06:33.321 "reset": true, 00:06:33.321 "compare": false, 
00:06:33.321 "compare_and_write": false, 00:06:33.321 "abort": true, 00:06:33.321 "nvme_admin": false, 00:06:33.321 "nvme_io": false 00:06:33.321 }, 00:06:33.321 "memory_domains": [ 00:06:33.321 { 00:06:33.321 "dma_device_id": "system", 00:06:33.321 "dma_device_type": 1 00:06:33.321 }, 00:06:33.321 { 00:06:33.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.321 "dma_device_type": 2 00:06:33.321 } 00:06:33.321 ], 00:06:33.321 "driver_specific": {} 00:06:33.321 }, 00:06:33.321 { 00:06:33.321 "name": "Passthru0", 00:06:33.321 "aliases": [ 00:06:33.321 "20cdf27b-94a4-53b4-a339-9669cad7f96c" 00:06:33.321 ], 00:06:33.321 "product_name": "passthru", 00:06:33.321 "block_size": 512, 00:06:33.321 "num_blocks": 16384, 00:06:33.321 "uuid": "20cdf27b-94a4-53b4-a339-9669cad7f96c", 00:06:33.321 "assigned_rate_limits": { 00:06:33.321 "rw_ios_per_sec": 0, 00:06:33.321 "rw_mbytes_per_sec": 0, 00:06:33.321 "r_mbytes_per_sec": 0, 00:06:33.321 "w_mbytes_per_sec": 0 00:06:33.321 }, 00:06:33.321 "claimed": false, 00:06:33.321 "zoned": false, 00:06:33.321 "supported_io_types": { 00:06:33.321 "read": true, 00:06:33.321 "write": true, 00:06:33.321 "unmap": true, 00:06:33.321 "write_zeroes": true, 00:06:33.321 "flush": true, 00:06:33.321 "reset": true, 00:06:33.321 "compare": false, 00:06:33.321 "compare_and_write": false, 00:06:33.321 "abort": true, 00:06:33.321 "nvme_admin": false, 00:06:33.321 "nvme_io": false 00:06:33.321 }, 00:06:33.321 "memory_domains": [ 00:06:33.321 { 00:06:33.321 "dma_device_id": "system", 00:06:33.321 "dma_device_type": 1 00:06:33.321 }, 00:06:33.321 { 00:06:33.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.321 "dma_device_type": 2 00:06:33.321 } 00:06:33.321 ], 00:06:33.321 "driver_specific": { 00:06:33.321 "passthru": { 00:06:33.321 "name": "Passthru0", 00:06:33.321 "base_bdev_name": "Malloc2" 00:06:33.321 } 00:06:33.321 } 00:06:33.321 } 00:06:33.321 ]' 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:33.321 ************************************ 00:06:33.321 END TEST rpc_daemon_integrity 00:06:33.321 ************************************ 00:06:33.321 
15:42:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:33.321 00:06:33.321 real 0m0.310s 00:06:33.321 user 0m0.186s 00:06:33.321 sys 0m0.060s 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.321 15:42:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:33.321 15:42:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:33.321 15:42:08 rpc -- rpc/rpc.sh@84 -- # killprocess 73865 00:06:33.321 15:42:08 rpc -- common/autotest_common.sh@946 -- # '[' -z 73865 ']' 00:06:33.321 15:42:08 rpc -- common/autotest_common.sh@950 -- # kill -0 73865 00:06:33.321 15:42:08 rpc -- common/autotest_common.sh@951 -- # uname 00:06:33.321 15:42:08 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:33.321 15:42:08 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73865 00:06:33.321 killing process with pid 73865 00:06:33.321 15:42:08 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:33.321 15:42:08 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:33.321 15:42:08 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73865' 00:06:33.321 15:42:08 rpc -- common/autotest_common.sh@965 -- # kill 73865 00:06:33.321 15:42:08 rpc -- common/autotest_common.sh@970 -- # wait 73865 00:06:33.889 00:06:33.889 real 0m2.596s 00:06:33.889 user 0m3.034s 00:06:33.889 sys 0m0.844s 00:06:33.889 15:42:08 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.889 ************************************ 00:06:33.889 END TEST rpc 00:06:33.889 ************************************ 00:06:33.889 15:42:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.889 15:42:08 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:33.889 15:42:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.889 15:42:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.889 15:42:08 -- common/autotest_common.sh@10 -- # set +x 00:06:33.889 ************************************ 00:06:33.889 START TEST skip_rpc 00:06:33.889 ************************************ 00:06:33.889 15:42:08 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:33.889 * Looking for test storage... 
00:06:33.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:33.889 15:42:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:33.889 15:42:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:33.889 15:42:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:33.889 15:42:08 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.889 15:42:08 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.889 15:42:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.148 ************************************ 00:06:34.148 START TEST skip_rpc 00:06:34.148 ************************************ 00:06:34.148 15:42:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:06:34.148 15:42:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=74064 00:06:34.148 15:42:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:34.148 15:42:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.148 15:42:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:34.148 [2024-07-20 15:42:08.789692] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:34.148 [2024-07-20 15:42:08.789995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74064 ] 00:06:34.148 [2024-07-20 15:42:08.940213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.406 [2024-07-20 15:42:08.988130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 74064 00:06:39.670 15:42:13 
skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 74064 ']' 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 74064 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74064 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:39.670 killing process with pid 74064 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74064' 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 74064 00:06:39.670 15:42:13 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 74064 00:06:39.670 00:06:39.670 real 0m5.437s 00:06:39.670 user 0m5.022s 00:06:39.670 sys 0m0.333s 00:06:39.670 ************************************ 00:06:39.670 END TEST skip_rpc 00:06:39.670 ************************************ 00:06:39.670 15:42:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.670 15:42:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.670 15:42:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:39.670 15:42:14 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.670 15:42:14 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.670 15:42:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.670 ************************************ 00:06:39.670 START TEST skip_rpc_with_json 00:06:39.670 ************************************ 00:06:39.670 15:42:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:06:39.670 15:42:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:39.670 15:42:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=74146 00:06:39.670 15:42:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.670 15:42:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.670 15:42:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 74146 00:06:39.670 15:42:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 74146 ']' 00:06:39.670 15:42:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.670 15:42:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.670 15:42:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.670 15:42:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.670 15:42:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:39.670 [2024-07-20 15:42:14.293565] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
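
The skip_rpc case that just passed (about 5.4 s of wall time, most of it the sleep) checks that a target started with --no-rpc-server answers no RPCs at all. Roughly, under the same assumptions as the sketch above:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                                    # the harness sleeps rather than polling the socket
    if scripts/rpc.py spdk_get_version; then   # must fail: no RPC listener exists
        kill "$spdk_pid"; exit 1
    fi
    kill "$spdk_pid"
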
00:06:39.670 [2024-07-20 15:42:14.293704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74146 ] 00:06:39.670 [2024-07-20 15:42:14.444870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.928 [2024-07-20 15:42:14.488780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:40.496 [2024-07-20 15:42:15.071777] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:40.496 request: 00:06:40.496 { 00:06:40.496 "trtype": "tcp", 00:06:40.496 "method": "nvmf_get_transports", 00:06:40.496 "req_id": 1 00:06:40.496 } 00:06:40.496 Got JSON-RPC error response 00:06:40.496 response: 00:06:40.496 { 00:06:40.496 "code": -19, 00:06:40.496 "message": "No such device" 00:06:40.496 } 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:40.496 [2024-07-20 15:42:15.083884] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.496 15:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:40.496 { 00:06:40.496 "subsystems": [ 00:06:40.496 { 00:06:40.496 "subsystem": "keyring", 00:06:40.496 "config": [] 00:06:40.496 }, 00:06:40.496 { 00:06:40.496 "subsystem": "iobuf", 00:06:40.496 "config": [ 00:06:40.496 { 00:06:40.496 "method": "iobuf_set_options", 00:06:40.496 "params": { 00:06:40.496 "small_pool_count": 8192, 00:06:40.496 "large_pool_count": 1024, 00:06:40.496 "small_bufsize": 8192, 00:06:40.496 "large_bufsize": 135168 00:06:40.496 } 00:06:40.496 } 00:06:40.496 ] 00:06:40.496 }, 00:06:40.496 { 00:06:40.496 "subsystem": "sock", 00:06:40.496 "config": [ 00:06:40.496 { 00:06:40.496 "method": "sock_set_default_impl", 00:06:40.496 "params": { 00:06:40.496 "impl_name": "posix" 00:06:40.496 } 00:06:40.496 }, 00:06:40.496 { 00:06:40.496 "method": "sock_impl_set_options", 00:06:40.496 "params": { 00:06:40.496 "impl_name": "ssl", 00:06:40.496 "recv_buf_size": 4096, 00:06:40.496 "send_buf_size": 4096, 00:06:40.496 
"enable_recv_pipe": true, 00:06:40.496 "enable_quickack": false, 00:06:40.496 "enable_placement_id": 0, 00:06:40.496 "enable_zerocopy_send_server": true, 00:06:40.496 "enable_zerocopy_send_client": false, 00:06:40.496 "zerocopy_threshold": 0, 00:06:40.496 "tls_version": 0, 00:06:40.496 "enable_ktls": false 00:06:40.496 } 00:06:40.496 }, 00:06:40.496 { 00:06:40.496 "method": "sock_impl_set_options", 00:06:40.496 "params": { 00:06:40.496 "impl_name": "posix", 00:06:40.496 "recv_buf_size": 2097152, 00:06:40.496 "send_buf_size": 2097152, 00:06:40.496 "enable_recv_pipe": true, 00:06:40.496 "enable_quickack": false, 00:06:40.496 "enable_placement_id": 0, 00:06:40.496 "enable_zerocopy_send_server": true, 00:06:40.496 "enable_zerocopy_send_client": false, 00:06:40.496 "zerocopy_threshold": 0, 00:06:40.496 "tls_version": 0, 00:06:40.496 "enable_ktls": false 00:06:40.496 } 00:06:40.496 } 00:06:40.496 ] 00:06:40.496 }, 00:06:40.496 { 00:06:40.496 "subsystem": "vmd", 00:06:40.496 "config": [] 00:06:40.496 }, 00:06:40.496 { 00:06:40.496 "subsystem": "accel", 00:06:40.496 "config": [ 00:06:40.496 { 00:06:40.496 "method": "accel_set_options", 00:06:40.496 "params": { 00:06:40.496 "small_cache_size": 128, 00:06:40.496 "large_cache_size": 16, 00:06:40.496 "task_count": 2048, 00:06:40.496 "sequence_count": 2048, 00:06:40.497 "buf_count": 2048 00:06:40.497 } 00:06:40.497 } 00:06:40.497 ] 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "subsystem": "bdev", 00:06:40.497 "config": [ 00:06:40.497 { 00:06:40.497 "method": "bdev_set_options", 00:06:40.497 "params": { 00:06:40.497 "bdev_io_pool_size": 65535, 00:06:40.497 "bdev_io_cache_size": 256, 00:06:40.497 "bdev_auto_examine": true, 00:06:40.497 "iobuf_small_cache_size": 128, 00:06:40.497 "iobuf_large_cache_size": 16 00:06:40.497 } 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "method": "bdev_raid_set_options", 00:06:40.497 "params": { 00:06:40.497 "process_window_size_kb": 1024 00:06:40.497 } 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "method": "bdev_iscsi_set_options", 00:06:40.497 "params": { 00:06:40.497 "timeout_sec": 30 00:06:40.497 } 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "method": "bdev_nvme_set_options", 00:06:40.497 "params": { 00:06:40.497 "action_on_timeout": "none", 00:06:40.497 "timeout_us": 0, 00:06:40.497 "timeout_admin_us": 0, 00:06:40.497 "keep_alive_timeout_ms": 10000, 00:06:40.497 "arbitration_burst": 0, 00:06:40.497 "low_priority_weight": 0, 00:06:40.497 "medium_priority_weight": 0, 00:06:40.497 "high_priority_weight": 0, 00:06:40.497 "nvme_adminq_poll_period_us": 10000, 00:06:40.497 "nvme_ioq_poll_period_us": 0, 00:06:40.497 "io_queue_requests": 0, 00:06:40.497 "delay_cmd_submit": true, 00:06:40.497 "transport_retry_count": 4, 00:06:40.497 "bdev_retry_count": 3, 00:06:40.497 "transport_ack_timeout": 0, 00:06:40.497 "ctrlr_loss_timeout_sec": 0, 00:06:40.497 "reconnect_delay_sec": 0, 00:06:40.497 "fast_io_fail_timeout_sec": 0, 00:06:40.497 "disable_auto_failback": false, 00:06:40.497 "generate_uuids": false, 00:06:40.497 "transport_tos": 0, 00:06:40.497 "nvme_error_stat": false, 00:06:40.497 "rdma_srq_size": 0, 00:06:40.497 "io_path_stat": false, 00:06:40.497 "allow_accel_sequence": false, 00:06:40.497 "rdma_max_cq_size": 0, 00:06:40.497 "rdma_cm_event_timeout_ms": 0, 00:06:40.497 "dhchap_digests": [ 00:06:40.497 "sha256", 00:06:40.497 "sha384", 00:06:40.497 "sha512" 00:06:40.497 ], 00:06:40.497 "dhchap_dhgroups": [ 00:06:40.497 "null", 00:06:40.497 "ffdhe2048", 00:06:40.497 "ffdhe3072", 00:06:40.497 "ffdhe4096", 00:06:40.497 "ffdhe6144", 
00:06:40.497 "ffdhe8192" 00:06:40.497 ] 00:06:40.497 } 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "method": "bdev_nvme_set_hotplug", 00:06:40.497 "params": { 00:06:40.497 "period_us": 100000, 00:06:40.497 "enable": false 00:06:40.497 } 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "method": "bdev_wait_for_examine" 00:06:40.497 } 00:06:40.497 ] 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "subsystem": "scsi", 00:06:40.497 "config": null 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "subsystem": "scheduler", 00:06:40.497 "config": [ 00:06:40.497 { 00:06:40.497 "method": "framework_set_scheduler", 00:06:40.497 "params": { 00:06:40.497 "name": "static" 00:06:40.497 } 00:06:40.497 } 00:06:40.497 ] 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "subsystem": "vhost_scsi", 00:06:40.497 "config": [] 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "subsystem": "vhost_blk", 00:06:40.497 "config": [] 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "subsystem": "ublk", 00:06:40.497 "config": [] 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "subsystem": "nbd", 00:06:40.497 "config": [] 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "subsystem": "nvmf", 00:06:40.497 "config": [ 00:06:40.497 { 00:06:40.497 "method": "nvmf_set_config", 00:06:40.497 "params": { 00:06:40.497 "discovery_filter": "match_any", 00:06:40.497 "admin_cmd_passthru": { 00:06:40.497 "identify_ctrlr": false 00:06:40.497 } 00:06:40.497 } 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "method": "nvmf_set_max_subsystems", 00:06:40.497 "params": { 00:06:40.497 "max_subsystems": 1024 00:06:40.497 } 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "method": "nvmf_set_crdt", 00:06:40.497 "params": { 00:06:40.497 "crdt1": 0, 00:06:40.497 "crdt2": 0, 00:06:40.497 "crdt3": 0 00:06:40.497 } 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "method": "nvmf_create_transport", 00:06:40.497 "params": { 00:06:40.497 "trtype": "TCP", 00:06:40.497 "max_queue_depth": 128, 00:06:40.497 "max_io_qpairs_per_ctrlr": 127, 00:06:40.497 "in_capsule_data_size": 4096, 00:06:40.497 "max_io_size": 131072, 00:06:40.497 "io_unit_size": 131072, 00:06:40.497 "max_aq_depth": 128, 00:06:40.497 "num_shared_buffers": 511, 00:06:40.497 "buf_cache_size": 4294967295, 00:06:40.497 "dif_insert_or_strip": false, 00:06:40.497 "zcopy": false, 00:06:40.497 "c2h_success": true, 00:06:40.497 "sock_priority": 0, 00:06:40.497 "abort_timeout_sec": 1, 00:06:40.497 "ack_timeout": 0, 00:06:40.497 "data_wr_pool_size": 0 00:06:40.497 } 00:06:40.497 } 00:06:40.497 ] 00:06:40.497 }, 00:06:40.497 { 00:06:40.497 "subsystem": "iscsi", 00:06:40.497 "config": [ 00:06:40.497 { 00:06:40.497 "method": "iscsi_set_options", 00:06:40.497 "params": { 00:06:40.497 "node_base": "iqn.2016-06.io.spdk", 00:06:40.497 "max_sessions": 128, 00:06:40.497 "max_connections_per_session": 2, 00:06:40.497 "max_queue_depth": 64, 00:06:40.497 "default_time2wait": 2, 00:06:40.497 "default_time2retain": 20, 00:06:40.497 "first_burst_length": 8192, 00:06:40.497 "immediate_data": true, 00:06:40.497 "allow_duplicated_isid": false, 00:06:40.497 "error_recovery_level": 0, 00:06:40.497 "nop_timeout": 60, 00:06:40.497 "nop_in_interval": 30, 00:06:40.497 "disable_chap": false, 00:06:40.497 "require_chap": false, 00:06:40.497 "mutual_chap": false, 00:06:40.497 "chap_group": 0, 00:06:40.497 "max_large_datain_per_connection": 64, 00:06:40.497 "max_r2t_per_connection": 4, 00:06:40.497 "pdu_pool_size": 36864, 00:06:40.497 "immediate_data_pool_size": 16384, 00:06:40.497 "data_out_pool_size": 2048 00:06:40.497 } 00:06:40.497 } 00:06:40.497 ] 00:06:40.497 } 00:06:40.497 ] 
00:06:40.497 } 00:06:40.497 15:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:40.497 15:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 74146 00:06:40.497 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 74146 ']' 00:06:40.497 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 74146 00:06:40.497 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:40.497 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:40.497 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74146 00:06:40.756 killing process with pid 74146 00:06:40.756 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:40.756 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:40.756 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74146' 00:06:40.756 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 74146 00:06:40.756 15:42:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 74146 00:06:41.016 15:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=74175 00:06:41.016 15:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:41.016 15:42:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:46.280 15:42:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 74175 00:06:46.280 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 74175 ']' 00:06:46.280 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 74175 00:06:46.280 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:46.280 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:46.280 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74175 00:06:46.280 killing process with pid 74175 00:06:46.280 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:46.280 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:46.280 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74175' 00:06:46.280 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 74175 00:06:46.280 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 74175 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:46.540 00:06:46.540 real 0m6.915s 00:06:46.540 user 0m6.409s 00:06:46.540 sys 0m0.749s 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.540 ************************************ 00:06:46.540 END TEST skip_rpc_with_json 00:06:46.540 
************************************ 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:46.540 15:42:21 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:46.540 15:42:21 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.540 15:42:21 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.540 15:42:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.540 ************************************ 00:06:46.540 START TEST skip_rpc_with_delay 00:06:46.540 ************************************ 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:46.540 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:46.540 [2024-07-20 15:42:21.289336] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
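
Two patterns show up around here. skip_rpc_with_json round-trips the live configuration: save_config produces the JSON dumped above, a second target is booted from it with --json, and the captured log is grepped for the 'TCP Transport Init' notice to prove the saved nvmf_create_transport call was replayed. skip_rpc_with_delay then asserts that --wait-for-rpc is rejected when no RPC server will start, which is the app.c error just printed. A sketch, with paths abbreviated from this run:

    scripts/rpc.py save_config > test/rpc/config.json
    build/bin/spdk_tgt --no-rpc-server -m 0x1 \
        --json test/rpc/config.json > test/rpc/log.txt 2>&1 &   # the harness captures output similarly
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt
    kill $!

    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        exit 1                                 # must fail: there is no RPC server to wait for
    fi
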
00:06:46.540 [2024-07-20 15:42:21.289482] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:46.800 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:46.800 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.800 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:46.800 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.800 ************************************ 00:06:46.800 END TEST skip_rpc_with_delay 00:06:46.800 ************************************ 00:06:46.800 00:06:46.800 real 0m0.166s 00:06:46.800 user 0m0.081s 00:06:46.800 sys 0m0.083s 00:06:46.800 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.800 15:42:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:46.800 15:42:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:46.800 15:42:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:46.800 15:42:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:46.800 15:42:21 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.800 15:42:21 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.800 15:42:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.800 ************************************ 00:06:46.800 START TEST exit_on_failed_rpc_init 00:06:46.800 ************************************ 00:06:46.800 15:42:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:46.800 15:42:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=74286 00:06:46.800 15:42:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.800 15:42:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 74286 00:06:46.800 15:42:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 74286 ']' 00:06:46.800 15:42:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.800 15:42:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.800 15:42:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.800 15:42:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.800 15:42:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:46.800 [2024-07-20 15:42:21.533258] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:46.800 [2024-07-20 15:42:21.533445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74286 ] 00:06:47.059 [2024-07-20 15:42:21.682720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.059 [2024-07-20 15:42:21.723864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:47.628 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:47.628 [2024-07-20 15:42:22.391102] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:47.628 [2024-07-20 15:42:22.391221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74304 ] 00:06:47.887 [2024-07-20 15:42:22.534146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.887 [2024-07-20 15:42:22.579262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.887 [2024-07-20 15:42:22.579376] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:47.887 [2024-07-20 15:42:22.579408] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:47.887 [2024-07-20 15:42:22.579442] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 74286 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 74286 ']' 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 74286 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74286 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74286' 00:06:48.147 killing process with pid 74286 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 74286 00:06:48.147 15:42:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 74286 00:06:48.405 00:06:48.405 real 0m1.656s 00:06:48.405 user 0m1.722s 00:06:48.405 sys 0m0.482s 00:06:48.405 ************************************ 00:06:48.405 END TEST exit_on_failed_rpc_init 00:06:48.405 ************************************ 00:06:48.405 15:42:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.405 15:42:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:48.405 15:42:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:48.405 00:06:48.405 real 0m14.603s 00:06:48.405 user 0m13.374s 00:06:48.405 sys 0m1.928s 00:06:48.405 15:42:23 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.405 15:42:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.405 ************************************ 00:06:48.405 END TEST skip_rpc 00:06:48.405 ************************************ 00:06:48.664 15:42:23 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:48.664 15:42:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.664 15:42:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.664 15:42:23 -- common/autotest_common.sh@10 -- # set +x 00:06:48.664 
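
exit_on_failed_rpc_init, summarized: a first target claims /var/tmp/spdk.sock, and a second instance on a different core mask must fail with 'RPC Unix domain socket path /var/tmp/spdk.sock in use' and stop with a non-zero code; the NOT wrapper turns that expected failure into a pass (note the es=234 -> 106 -> 1 normalization above). In outline:

    build/bin/spdk_tgt -m 0x1 &      # first instance owns /var/tmp/spdk.sock
    # ... wait for the socket to appear, then:
    if build/bin/spdk_tgt -m 0x2; then
        exit 1                       # unexpected: two targets cannot share one RPC socket
    fi
    kill %1
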
************************************ 00:06:48.664 START TEST rpc_client 00:06:48.664 ************************************ 00:06:48.664 15:42:23 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:48.664 * Looking for test storage... 00:06:48.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:48.664 15:42:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:48.664 OK 00:06:48.664 15:42:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:48.664 00:06:48.664 real 0m0.183s 00:06:48.664 user 0m0.073s 00:06:48.664 sys 0m0.118s 00:06:48.664 15:42:23 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.664 15:42:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:48.664 ************************************ 00:06:48.664 END TEST rpc_client 00:06:48.664 ************************************ 00:06:48.664 15:42:23 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:48.664 15:42:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.664 15:42:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.664 15:42:23 -- common/autotest_common.sh@10 -- # set +x 00:06:48.923 ************************************ 00:06:48.923 START TEST json_config 00:06:48.923 ************************************ 00:06:48.923 15:42:23 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:48.923 15:42:23 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f2426c-824d-4da7-acc2-ef92aa575225 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=88f2426c-824d-4da7-acc2-ef92aa575225 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.923 15:42:23 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:48.923 15:42:23 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.923 15:42:23 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.923 15:42:23 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.924 15:42:23 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.924 15:42:23 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.924 15:42:23 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.924 15:42:23 json_config -- paths/export.sh@5 -- # export PATH 00:06:48.924 15:42:23 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.924 15:42:23 json_config -- nvmf/common.sh@47 -- # : 0 00:06:48.924 15:42:23 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.924 15:42:23 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.924 15:42:23 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.924 15:42:23 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.924 15:42:23 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.924 15:42:23 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.924 15:42:23 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.924 15:42:23 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.924 15:42:23 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:48.924 WARNING: No tests are enabled so not running JSON configuration tests 00:06:48.924 15:42:23 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:48.924 15:42:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:48.924 15:42:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:48.924 15:42:23 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:48.924 15:42:23 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:48.924 15:42:23 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:48.924 ************************************ 00:06:48.924 END TEST json_config 00:06:48.924 ************************************ 00:06:48.924 00:06:48.924 real 0m0.111s 00:06:48.924 user 0m0.060s 00:06:48.924 sys 0m0.052s 00:06:48.924 15:42:23 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.924 15:42:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.924 15:42:23 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:48.924 15:42:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.924 15:42:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.924 15:42:23 -- common/autotest_common.sh@10 -- # set +x 00:06:48.924 ************************************ 00:06:48.924 START TEST json_config_extra_key 00:06:48.924 ************************************ 00:06:48.924 15:42:23 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:49.183 15:42:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:88f2426c-824d-4da7-acc2-ef92aa575225 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=88f2426c-824d-4da7-acc2-ef92aa575225 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.183 15:42:23 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:49.183 15:42:23 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.184 15:42:23 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.184 15:42:23 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.184 
15:42:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.184 15:42:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.184 15:42:23 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.184 15:42:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:49.184 15:42:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.184 15:42:23 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:49.184 15:42:23 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:49.184 15:42:23 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:49.184 15:42:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.184 15:42:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.184 15:42:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.184 15:42:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:49.184 15:42:23 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:49.184 15:42:23 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:49.184 15:42:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:49.184 15:42:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:49.184 15:42:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:49.184 15:42:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:49.184 15:42:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:49.184 15:42:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:49.184 15:42:23 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:49.184 15:42:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:49.184 15:42:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:49.184 15:42:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:49.184 15:42:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:49.184 INFO: launching applications... 00:06:49.184 15:42:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:49.184 15:42:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:49.184 15:42:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:49.184 15:42:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:49.184 15:42:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:49.184 15:42:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:49.184 15:42:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:49.184 15:42:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:49.184 15:42:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=74457 00:06:49.184 Waiting for target to run... 00:06:49.184 15:42:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:49.184 15:42:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 74457 /var/tmp/spdk_tgt.sock 00:06:49.184 15:42:23 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 74457 ']' 00:06:49.184 15:42:23 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:49.184 15:42:23 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:49.184 15:42:23 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:49.184 15:42:23 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:49.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:49.184 15:42:23 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:49.184 15:42:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:49.184 [2024-07-20 15:42:23.862330] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
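The repeated 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' messages above come from the harness's waitforlisten helper: spdk_tgt is launched in the background and the test polls until the RPC UNIX socket appears before issuing any RPC. A minimal sketch of that pattern in shell (the function name wait_for_rpc_sock and the retry bounds are illustrative, not the exact autotest_common.sh implementation):

  # Poll until a backgrounded spdk_tgt has created its RPC socket.
  wait_for_rpc_sock() {                         # hypothetical helper name
      local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1  # target died before listening
          [[ -S $sock ]] && return 0              # socket exists -> ready for RPC
          sleep 0.1
      done
      return 1                                    # timed out
  }
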
00:06:49.184 [2024-07-20 15:42:23.862479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74457 ] 00:06:49.443 [2024-07-20 15:42:24.225971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.702 [2024-07-20 15:42:24.253691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.961 00:06:49.961 INFO: shutting down applications... 00:06:49.961 15:42:24 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:49.961 15:42:24 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:49.961 15:42:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:49.961 15:42:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:49.961 15:42:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:49.961 15:42:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:49.961 15:42:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:49.961 15:42:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 74457 ]] 00:06:49.961 15:42:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 74457 00:06:49.961 15:42:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:49.961 15:42:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:49.961 15:42:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74457 00:06:49.961 15:42:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:50.530 15:42:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:50.530 15:42:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.530 15:42:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74457 00:06:50.530 15:42:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:50.530 15:42:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:50.530 15:42:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:50.530 15:42:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:50.530 SPDK target shutdown done 00:06:50.530 Success 00:06:50.530 15:42:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:50.530 00:06:50.530 real 0m1.572s 00:06:50.530 user 0m1.338s 00:06:50.530 sys 0m0.456s 00:06:50.530 15:42:25 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.530 ************************************ 00:06:50.530 END TEST json_config_extra_key 00:06:50.530 ************************************ 00:06:50.530 15:42:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:50.530 15:42:25 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:50.530 15:42:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:50.530 15:42:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.530 15:42:25 -- common/autotest_common.sh@10 -- # set +x 00:06:50.530 ************************************ 00:06:50.530 START TEST alias_rpc 00:06:50.530 ************************************ 
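The json_config_extra_key teardown above (kill -SIGINT, then kill -0 polling every 0.5s for at most 30 tries) is how these tests stop the target without leaving a stale process behind. A minimal sketch of that handshake (the function name is illustrative; the signal, loop bound, and interval are copied from the trace):

  shutdown_spdk_tgt() {
      local pid=$1 i
      kill -SIGINT "$pid"                       # ask the reactors to exit cleanly
      for ((i = 0; i < 30; i++)); do
          # kill -0 only tests for existence; it fails once the PID is gone
          kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
          sleep 0.5
      done
      return 1                                  # still alive after ~15s
  }
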
00:06:50.530 15:42:25 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:50.817 * Looking for test storage... 00:06:50.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:50.817 15:42:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:50.817 15:42:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74528 00:06:50.817 15:42:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.817 15:42:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74528 00:06:50.817 15:42:25 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 74528 ']' 00:06:50.817 15:42:25 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.817 15:42:25 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.817 15:42:25 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.817 15:42:25 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.817 15:42:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.817 [2024-07-20 15:42:25.513461] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:50.817 [2024-07-20 15:42:25.513790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74528 ] 00:06:51.095 [2024-07-20 15:42:25.666064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.095 [2024-07-20 15:42:25.710580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.666 15:42:26 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.666 15:42:26 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:51.666 15:42:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:51.924 15:42:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74528 00:06:51.924 15:42:26 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 74528 ']' 00:06:51.924 15:42:26 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 74528 00:06:51.924 15:42:26 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:51.924 15:42:26 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:51.924 15:42:26 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74528 00:06:51.924 killing process with pid 74528 00:06:51.924 15:42:26 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:51.924 15:42:26 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:51.924 15:42:26 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74528' 00:06:51.924 15:42:26 alias_rpc -- common/autotest_common.sh@965 -- # kill 74528 00:06:51.924 15:42:26 alias_rpc -- common/autotest_common.sh@970 -- # wait 74528 00:06:52.182 ************************************ 00:06:52.182 END TEST alias_rpc 00:06:52.182 ************************************ 00:06:52.182 00:06:52.182 real 0m1.641s 00:06:52.182 user 0m1.600s 00:06:52.182 sys 0m0.522s 00:06:52.182 
15:42:26 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.182 15:42:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.440 15:42:26 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:52.440 15:42:26 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:52.440 15:42:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:52.440 15:42:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.440 15:42:26 -- common/autotest_common.sh@10 -- # set +x 00:06:52.440 ************************************ 00:06:52.440 START TEST spdkcli_tcp 00:06:52.440 ************************************ 00:06:52.440 15:42:26 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:52.440 * Looking for test storage... 00:06:52.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:52.440 15:42:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:52.440 15:42:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:52.440 15:42:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:52.440 15:42:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:52.440 15:42:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:52.440 15:42:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:52.440 15:42:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:52.440 15:42:27 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:52.440 15:42:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.440 15:42:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=74600 00:06:52.440 15:42:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:52.440 15:42:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 74600 00:06:52.440 15:42:27 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 74600 ']' 00:06:52.440 15:42:27 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.440 15:42:27 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:52.440 15:42:27 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.440 15:42:27 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:52.440 15:42:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.440 [2024-07-20 15:42:27.225078] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
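spdkcli_tcp starts the target with -m 0x3, a hexadecimal core mask in which bit n requests a reactor on core n, so 0x3 selects cores 0 and 1 (two reactors, matching the 'Total cores available: 2' notice below). Building such masks in shell is a one-liner, for example:

  printf '0x%x\n' $(( (1 << 0) | (1 << 1) ))    # 0x3 -> cores 0 and 1
  printf '0x%x\n' $(( (1 << 2) | (1 << 3) ))    # 0xc -> cores 2 and 3
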
00:06:52.440 [2024-07-20 15:42:27.225428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74600 ] 00:06:52.711 [2024-07-20 15:42:27.377288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.711 [2024-07-20 15:42:27.425440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.711 [2024-07-20 15:42:27.425864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.283 15:42:28 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:53.283 15:42:28 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:53.283 15:42:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=74616 00:06:53.283 15:42:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:53.283 15:42:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:53.541 [ 00:06:53.541 "bdev_malloc_delete", 00:06:53.541 "bdev_malloc_create", 00:06:53.541 "bdev_null_resize", 00:06:53.541 "bdev_null_delete", 00:06:53.541 "bdev_null_create", 00:06:53.541 "bdev_nvme_cuse_unregister", 00:06:53.541 "bdev_nvme_cuse_register", 00:06:53.541 "bdev_opal_new_user", 00:06:53.541 "bdev_opal_set_lock_state", 00:06:53.541 "bdev_opal_delete", 00:06:53.541 "bdev_opal_get_info", 00:06:53.541 "bdev_opal_create", 00:06:53.541 "bdev_nvme_opal_revert", 00:06:53.541 "bdev_nvme_opal_init", 00:06:53.541 "bdev_nvme_send_cmd", 00:06:53.541 "bdev_nvme_get_path_iostat", 00:06:53.541 "bdev_nvme_get_mdns_discovery_info", 00:06:53.541 "bdev_nvme_stop_mdns_discovery", 00:06:53.541 "bdev_nvme_start_mdns_discovery", 00:06:53.541 "bdev_nvme_set_multipath_policy", 00:06:53.541 "bdev_nvme_set_preferred_path", 00:06:53.541 "bdev_nvme_get_io_paths", 00:06:53.541 "bdev_nvme_remove_error_injection", 00:06:53.541 "bdev_nvme_add_error_injection", 00:06:53.541 "bdev_nvme_get_discovery_info", 00:06:53.541 "bdev_nvme_stop_discovery", 00:06:53.541 "bdev_nvme_start_discovery", 00:06:53.541 "bdev_nvme_get_controller_health_info", 00:06:53.541 "bdev_nvme_disable_controller", 00:06:53.541 "bdev_nvme_enable_controller", 00:06:53.541 "bdev_nvme_reset_controller", 00:06:53.541 "bdev_nvme_get_transport_statistics", 00:06:53.541 "bdev_nvme_apply_firmware", 00:06:53.541 "bdev_nvme_detach_controller", 00:06:53.541 "bdev_nvme_get_controllers", 00:06:53.541 "bdev_nvme_attach_controller", 00:06:53.541 "bdev_nvme_set_hotplug", 00:06:53.541 "bdev_nvme_set_options", 00:06:53.541 "bdev_passthru_delete", 00:06:53.541 "bdev_passthru_create", 00:06:53.541 "bdev_lvol_set_parent_bdev", 00:06:53.541 "bdev_lvol_set_parent", 00:06:53.541 "bdev_lvol_check_shallow_copy", 00:06:53.541 "bdev_lvol_start_shallow_copy", 00:06:53.541 "bdev_lvol_grow_lvstore", 00:06:53.541 "bdev_lvol_get_lvols", 00:06:53.541 "bdev_lvol_get_lvstores", 00:06:53.541 "bdev_lvol_delete", 00:06:53.541 "bdev_lvol_set_read_only", 00:06:53.541 "bdev_lvol_resize", 00:06:53.541 "bdev_lvol_decouple_parent", 00:06:53.541 "bdev_lvol_inflate", 00:06:53.541 "bdev_lvol_rename", 00:06:53.541 "bdev_lvol_clone_bdev", 00:06:53.541 "bdev_lvol_clone", 00:06:53.541 "bdev_lvol_snapshot", 00:06:53.541 "bdev_lvol_create", 00:06:53.541 "bdev_lvol_delete_lvstore", 00:06:53.541 "bdev_lvol_rename_lvstore", 00:06:53.541 "bdev_lvol_create_lvstore", 00:06:53.541 
"bdev_raid_set_options", 00:06:53.541 "bdev_raid_remove_base_bdev", 00:06:53.541 "bdev_raid_add_base_bdev", 00:06:53.541 "bdev_raid_delete", 00:06:53.541 "bdev_raid_create", 00:06:53.541 "bdev_raid_get_bdevs", 00:06:53.541 "bdev_error_inject_error", 00:06:53.541 "bdev_error_delete", 00:06:53.541 "bdev_error_create", 00:06:53.541 "bdev_split_delete", 00:06:53.541 "bdev_split_create", 00:06:53.541 "bdev_delay_delete", 00:06:53.541 "bdev_delay_create", 00:06:53.541 "bdev_delay_update_latency", 00:06:53.541 "bdev_zone_block_delete", 00:06:53.541 "bdev_zone_block_create", 00:06:53.541 "blobfs_create", 00:06:53.541 "blobfs_detect", 00:06:53.541 "blobfs_set_cache_size", 00:06:53.541 "bdev_xnvme_delete", 00:06:53.541 "bdev_xnvme_create", 00:06:53.541 "bdev_aio_delete", 00:06:53.541 "bdev_aio_rescan", 00:06:53.541 "bdev_aio_create", 00:06:53.541 "bdev_ftl_set_property", 00:06:53.541 "bdev_ftl_get_properties", 00:06:53.541 "bdev_ftl_get_stats", 00:06:53.541 "bdev_ftl_unmap", 00:06:53.541 "bdev_ftl_unload", 00:06:53.541 "bdev_ftl_delete", 00:06:53.541 "bdev_ftl_load", 00:06:53.541 "bdev_ftl_create", 00:06:53.541 "bdev_virtio_attach_controller", 00:06:53.541 "bdev_virtio_scsi_get_devices", 00:06:53.541 "bdev_virtio_detach_controller", 00:06:53.541 "bdev_virtio_blk_set_hotplug", 00:06:53.541 "bdev_iscsi_delete", 00:06:53.541 "bdev_iscsi_create", 00:06:53.541 "bdev_iscsi_set_options", 00:06:53.541 "accel_error_inject_error", 00:06:53.541 "ioat_scan_accel_module", 00:06:53.541 "dsa_scan_accel_module", 00:06:53.541 "iaa_scan_accel_module", 00:06:53.541 "keyring_file_remove_key", 00:06:53.541 "keyring_file_add_key", 00:06:53.541 "keyring_linux_set_options", 00:06:53.541 "iscsi_get_histogram", 00:06:53.541 "iscsi_enable_histogram", 00:06:53.541 "iscsi_set_options", 00:06:53.541 "iscsi_get_auth_groups", 00:06:53.541 "iscsi_auth_group_remove_secret", 00:06:53.541 "iscsi_auth_group_add_secret", 00:06:53.541 "iscsi_delete_auth_group", 00:06:53.541 "iscsi_create_auth_group", 00:06:53.541 "iscsi_set_discovery_auth", 00:06:53.541 "iscsi_get_options", 00:06:53.541 "iscsi_target_node_request_logout", 00:06:53.541 "iscsi_target_node_set_redirect", 00:06:53.541 "iscsi_target_node_set_auth", 00:06:53.541 "iscsi_target_node_add_lun", 00:06:53.541 "iscsi_get_stats", 00:06:53.541 "iscsi_get_connections", 00:06:53.541 "iscsi_portal_group_set_auth", 00:06:53.541 "iscsi_start_portal_group", 00:06:53.541 "iscsi_delete_portal_group", 00:06:53.541 "iscsi_create_portal_group", 00:06:53.541 "iscsi_get_portal_groups", 00:06:53.541 "iscsi_delete_target_node", 00:06:53.541 "iscsi_target_node_remove_pg_ig_maps", 00:06:53.541 "iscsi_target_node_add_pg_ig_maps", 00:06:53.541 "iscsi_create_target_node", 00:06:53.541 "iscsi_get_target_nodes", 00:06:53.541 "iscsi_delete_initiator_group", 00:06:53.541 "iscsi_initiator_group_remove_initiators", 00:06:53.541 "iscsi_initiator_group_add_initiators", 00:06:53.541 "iscsi_create_initiator_group", 00:06:53.541 "iscsi_get_initiator_groups", 00:06:53.541 "nvmf_set_crdt", 00:06:53.541 "nvmf_set_config", 00:06:53.541 "nvmf_set_max_subsystems", 00:06:53.541 "nvmf_stop_mdns_prr", 00:06:53.541 "nvmf_publish_mdns_prr", 00:06:53.541 "nvmf_subsystem_get_listeners", 00:06:53.541 "nvmf_subsystem_get_qpairs", 00:06:53.541 "nvmf_subsystem_get_controllers", 00:06:53.541 "nvmf_get_stats", 00:06:53.541 "nvmf_get_transports", 00:06:53.541 "nvmf_create_transport", 00:06:53.541 "nvmf_get_targets", 00:06:53.541 "nvmf_delete_target", 00:06:53.541 "nvmf_create_target", 00:06:53.541 "nvmf_subsystem_allow_any_host", 
00:06:53.541 "nvmf_subsystem_remove_host", 00:06:53.541 "nvmf_subsystem_add_host", 00:06:53.541 "nvmf_ns_remove_host", 00:06:53.541 "nvmf_ns_add_host", 00:06:53.541 "nvmf_subsystem_remove_ns", 00:06:53.541 "nvmf_subsystem_add_ns", 00:06:53.541 "nvmf_subsystem_listener_set_ana_state", 00:06:53.541 "nvmf_discovery_get_referrals", 00:06:53.541 "nvmf_discovery_remove_referral", 00:06:53.541 "nvmf_discovery_add_referral", 00:06:53.541 "nvmf_subsystem_remove_listener", 00:06:53.541 "nvmf_subsystem_add_listener", 00:06:53.541 "nvmf_delete_subsystem", 00:06:53.541 "nvmf_create_subsystem", 00:06:53.541 "nvmf_get_subsystems", 00:06:53.541 "env_dpdk_get_mem_stats", 00:06:53.541 "nbd_get_disks", 00:06:53.541 "nbd_stop_disk", 00:06:53.541 "nbd_start_disk", 00:06:53.541 "ublk_recover_disk", 00:06:53.541 "ublk_get_disks", 00:06:53.541 "ublk_stop_disk", 00:06:53.541 "ublk_start_disk", 00:06:53.541 "ublk_destroy_target", 00:06:53.541 "ublk_create_target", 00:06:53.541 "virtio_blk_create_transport", 00:06:53.541 "virtio_blk_get_transports", 00:06:53.541 "vhost_controller_set_coalescing", 00:06:53.541 "vhost_get_controllers", 00:06:53.541 "vhost_delete_controller", 00:06:53.541 "vhost_create_blk_controller", 00:06:53.541 "vhost_scsi_controller_remove_target", 00:06:53.541 "vhost_scsi_controller_add_target", 00:06:53.541 "vhost_start_scsi_controller", 00:06:53.541 "vhost_create_scsi_controller", 00:06:53.541 "thread_set_cpumask", 00:06:53.541 "framework_get_scheduler", 00:06:53.541 "framework_set_scheduler", 00:06:53.541 "framework_get_reactors", 00:06:53.541 "thread_get_io_channels", 00:06:53.541 "thread_get_pollers", 00:06:53.541 "thread_get_stats", 00:06:53.541 "framework_monitor_context_switch", 00:06:53.541 "spdk_kill_instance", 00:06:53.541 "log_enable_timestamps", 00:06:53.541 "log_get_flags", 00:06:53.541 "log_clear_flag", 00:06:53.541 "log_set_flag", 00:06:53.541 "log_get_level", 00:06:53.541 "log_set_level", 00:06:53.541 "log_get_print_level", 00:06:53.541 "log_set_print_level", 00:06:53.541 "framework_enable_cpumask_locks", 00:06:53.541 "framework_disable_cpumask_locks", 00:06:53.541 "framework_wait_init", 00:06:53.541 "framework_start_init", 00:06:53.541 "scsi_get_devices", 00:06:53.541 "bdev_get_histogram", 00:06:53.541 "bdev_enable_histogram", 00:06:53.541 "bdev_set_qos_limit", 00:06:53.542 "bdev_set_qd_sampling_period", 00:06:53.542 "bdev_get_bdevs", 00:06:53.542 "bdev_reset_iostat", 00:06:53.542 "bdev_get_iostat", 00:06:53.542 "bdev_examine", 00:06:53.542 "bdev_wait_for_examine", 00:06:53.542 "bdev_set_options", 00:06:53.542 "notify_get_notifications", 00:06:53.542 "notify_get_types", 00:06:53.542 "accel_get_stats", 00:06:53.542 "accel_set_options", 00:06:53.542 "accel_set_driver", 00:06:53.542 "accel_crypto_key_destroy", 00:06:53.542 "accel_crypto_keys_get", 00:06:53.542 "accel_crypto_key_create", 00:06:53.542 "accel_assign_opc", 00:06:53.542 "accel_get_module_info", 00:06:53.542 "accel_get_opc_assignments", 00:06:53.542 "vmd_rescan", 00:06:53.542 "vmd_remove_device", 00:06:53.542 "vmd_enable", 00:06:53.542 "sock_get_default_impl", 00:06:53.542 "sock_set_default_impl", 00:06:53.542 "sock_impl_set_options", 00:06:53.542 "sock_impl_get_options", 00:06:53.542 "iobuf_get_stats", 00:06:53.542 "iobuf_set_options", 00:06:53.542 "framework_get_pci_devices", 00:06:53.542 "framework_get_config", 00:06:53.542 "framework_get_subsystems", 00:06:53.542 "trace_get_info", 00:06:53.542 "trace_get_tpoint_group_mask", 00:06:53.542 "trace_disable_tpoint_group", 00:06:53.542 "trace_enable_tpoint_group", 
00:06:53.542 "trace_clear_tpoint_mask", 00:06:53.542 "trace_set_tpoint_mask", 00:06:53.542 "keyring_get_keys", 00:06:53.542 "spdk_get_version", 00:06:53.542 "rpc_get_methods" 00:06:53.542 ] 00:06:53.542 15:42:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:53.542 15:42:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.542 15:42:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:53.542 15:42:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:53.542 15:42:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 74600 00:06:53.542 15:42:28 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 74600 ']' 00:06:53.542 15:42:28 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 74600 00:06:53.542 15:42:28 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:53.542 15:42:28 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:53.542 15:42:28 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74600 00:06:53.542 killing process with pid 74600 00:06:53.542 15:42:28 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:53.542 15:42:28 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:53.542 15:42:28 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74600' 00:06:53.542 15:42:28 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 74600 00:06:53.542 15:42:28 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 74600 00:06:54.111 ************************************ 00:06:54.111 END TEST spdkcli_tcp 00:06:54.111 ************************************ 00:06:54.111 00:06:54.111 real 0m1.657s 00:06:54.111 user 0m2.734s 00:06:54.111 sys 0m0.541s 00:06:54.111 15:42:28 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.111 15:42:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.111 15:42:28 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:54.111 15:42:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:54.111 15:42:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.111 15:42:28 -- common/autotest_common.sh@10 -- # set +x 00:06:54.111 ************************************ 00:06:54.111 START TEST dpdk_mem_utility 00:06:54.111 ************************************ 00:06:54.111 15:42:28 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:54.111 * Looking for test storage... 00:06:54.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:54.111 15:42:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:54.111 15:42:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=74686 00:06:54.111 15:42:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:54.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:54.111 15:42:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 74686 00:06:54.111 15:42:28 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 74686 ']' 00:06:54.111 15:42:28 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.111 15:42:28 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:54.111 15:42:28 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.111 15:42:28 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:54.111 15:42:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:54.369 [2024-07-20 15:42:28.946517] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:54.369 [2024-07-20 15:42:28.946646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74686 ] 00:06:54.369 [2024-07-20 15:42:29.095755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.369 [2024-07-20 15:42:29.139931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.306 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:55.306 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:55.306 15:42:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:55.306 15:42:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:55.306 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.306 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:55.306 { 00:06:55.306 "filename": "/tmp/spdk_mem_dump.txt" 00:06:55.306 } 00:06:55.306 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.306 15:42:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:55.306 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:55.306 1 heaps totaling size 814.000000 MiB 00:06:55.306 size: 814.000000 MiB heap id: 0 00:06:55.306 end heaps---------- 00:06:55.306 8 mempools totaling size 598.116089 MiB 00:06:55.306 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:55.306 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:55.306 size: 84.521057 MiB name: bdev_io_74686 00:06:55.306 size: 51.011292 MiB name: evtpool_74686 00:06:55.306 size: 50.003479 MiB name: msgpool_74686 00:06:55.306 size: 21.763794 MiB name: PDU_Pool 00:06:55.306 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:55.306 size: 0.026123 MiB name: Session_Pool 00:06:55.306 end mempools------- 00:06:55.306 6 memzones totaling size 4.142822 MiB 00:06:55.306 size: 1.000366 MiB name: RG_ring_0_74686 00:06:55.306 size: 1.000366 MiB name: RG_ring_1_74686 00:06:55.306 size: 1.000366 MiB name: RG_ring_4_74686 00:06:55.306 size: 1.000366 MiB name: RG_ring_5_74686 00:06:55.306 size: 0.125366 MiB name: RG_ring_2_74686 00:06:55.306 size: 0.015991 MiB name: RG_ring_3_74686 00:06:55.306 end memzones------- 00:06:55.306 15:42:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # 
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:55.306 heap id: 0 total size: 814.000000 MiB number of busy elements: 303 number of free elements: 15 00:06:55.306 list of free elements. size: 12.471375 MiB 00:06:55.306 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:55.306 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:55.306 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:55.306 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:55.306 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:55.306 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:55.306 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:55.306 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:55.306 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:55.306 element at address: 0x20001aa00000 with size: 0.568054 MiB 00:06:55.306 element at address: 0x20000b200000 with size: 0.489624 MiB 00:06:55.306 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:55.306 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:55.306 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:55.306 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:55.306 list of standard malloc elements. size: 199.266052 MiB 00:06:55.306 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:55.306 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:55.306 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:55.306 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:55.306 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:55.306 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:55.306 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:55.306 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:55.306 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:55.306 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:55.306 element 
at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:55.306 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59480 
with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000b27d880 with size: 0.000183 MiB 
00:06:55.307 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:55.307 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:55.307 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:55.307 element at 
address: 0x20001aa93580 with size: 0.000183 MiB
00:06:55.307 [~130 near-identical free-list entries elided: elements at 0x20001aa93640-0x20001aa95440, 0x200027e65500-0x200027e655c0 and 0x200027e6c1c0-0x200027e6ff00, each with size: 0.000183 MiB]
00:06:55.308 list of memzone associated elements. size: 602.262573 MiB
00:06:55.308 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:06:55.308 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:55.308 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:06:55.308 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:55.308 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:06:55.308 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_74686_0
00:06:55.308 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:06:55.308 associated memzone info: size: 48.002930 MiB name: MP_evtpool_74686_0
00:06:55.308 element at address: 0x200003fff380 with size: 48.003052 MiB
00:06:55.308 associated memzone info: size: 48.002930 MiB name: MP_msgpool_74686_0
00:06:55.308 element at address: 0x2000195be940 with size: 20.255554 MiB
00:06:55.308 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:55.308 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:06:55.308 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:55.308 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:06:55.308 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_74686
00:06:55.308 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:06:55.308 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_74686
00:06:55.308 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:06:55.308 associated memzone info: size: 1.007996 MiB name: MP_evtpool_74686
00:06:55.308 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:06:55.308 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:55.308 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:06:55.308 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:55.308 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:06:55.308 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:55.308 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:06:55.308 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:55.308 element at address: 0x200003eff180 with size: 1.000488 MiB
00:06:55.308 associated memzone info: size: 1.000366 MiB name: RG_ring_0_74686
00:06:55.308 element at address: 0x200003affc00 with size: 1.000488 MiB
00:06:55.308 associated memzone info: size: 1.000366 MiB name: RG_ring_1_74686
00:06:55.308 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:06:55.308 associated memzone info: size: 1.000366 MiB name: RG_ring_4_74686
00:06:55.308 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:06:55.308 associated memzone info: size: 1.000366 MiB name: RG_ring_5_74686
00:06:55.308 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:06:55.308 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_74686
00:06:55.308 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:06:55.308 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:55.308 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:06:55.308 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:55.308 element at address: 0x20001947c540 with size: 0.250488 MiB
00:06:55.308 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:55.308 element at address: 0x200003adf880 with size: 0.125488 MiB
00:06:55.308 associated memzone info: size: 0.125366 MiB name: RG_ring_2_74686
00:06:55.308 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:06:55.308 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:55.308 element at address: 0x200027e65680 with size: 0.023743 MiB
00:06:55.308 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:55.308 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:06:55.308 associated memzone info: size: 0.015991 MiB name: RG_ring_3_74686
00:06:55.308 element at address: 0x200027e6b7c0 with size: 0.002441 MiB
00:06:55.308 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:55.308 element at address: 0x2000002d6780 with size: 0.000305 MiB
00:06:55.308 associated memzone info: size: 0.000183 MiB name: MP_msgpool_74686
00:06:55.308 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:06:55.308 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_74686
00:06:55.308 element at address: 0x200027e6c280 with size: 0.000305 MiB
00:06:55.309 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:55.309 15:42:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:55.309 15:42:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 74686
00:06:55.309 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 74686 ']'
00:06:55.309 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 74686
00:06:55.309 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname
00:06:55.309 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:55.309 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74686
00:06:55.309 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0
killing process with pid 74686
00:06:55.309 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:55.309 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74686'
00:06:55.309 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 74686
00:06:55.309 15:42:29 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 74686
00:06:55.567 ************************************
00:06:55.567 END TEST dpdk_mem_utility
00:06:55.567 ************************************
00:06:55.567
00:06:55.567 real 0m1.554s
00:06:55.567 user 0m1.468s
00:06:55.567 sys 0m0.513s
00:06:55.567 15:42:30 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:55.567 15:42:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:55.567 15:42:30 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:55.567 15:42:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:55.567 15:42:30 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:55.567 15:42:30 -- common/autotest_common.sh@10 -- # set +x
00:06:55.567 ************************************
00:06:55.567 START TEST event
00:06:55.567 ************************************
00:06:55.567 15:42:30 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:55.824 * Looking for test storage...
00:06:55.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:55.824 15:42:30 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:55.824 15:42:30 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:55.824 15:42:30 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:55.824 15:42:30 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:55.824 15:42:30 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.824 15:42:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.824 ************************************ 00:06:55.824 START TEST event_perf 00:06:55.824 ************************************ 00:06:55.824 15:42:30 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:55.824 Running I/O for 1 seconds...[2024-07-20 15:42:30.520599] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:55.824 [2024-07-20 15:42:30.520864] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74761 ] 00:06:56.082 [2024-07-20 15:42:30.679906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.082 [2024-07-20 15:42:30.727716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.082 [2024-07-20 15:42:30.727853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.082 [2024-07-20 15:42:30.727904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.082 [2024-07-20 15:42:30.728013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.015 Running I/O for 1 seconds... 00:06:57.015 lcore 0: 192681 00:06:57.015 lcore 1: 192680 00:06:57.015 lcore 2: 192680 00:06:57.015 lcore 3: 192680 00:06:57.015 done. 00:06:57.274 00:06:57.274 real 0m1.334s 00:06:57.274 user 0m4.093s 00:06:57.274 sys 0m0.116s 00:06:57.274 15:42:31 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.274 ************************************ 00:06:57.274 END TEST event_perf 00:06:57.274 ************************************ 00:06:57.274 15:42:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.274 15:42:31 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:57.274 15:42:31 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:57.274 15:42:31 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.274 15:42:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.274 ************************************ 00:06:57.274 START TEST event_reactor 00:06:57.274 ************************************ 00:06:57.274 15:42:31 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:57.274 [2024-07-20 15:42:31.931875] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:57.274 [2024-07-20 15:42:31.931993] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74800 ] 00:06:57.532 [2024-07-20 15:42:32.082637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.532 [2024-07-20 15:42:32.127802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.467 test_start 00:06:58.467 oneshot 00:06:58.467 tick 100 00:06:58.467 tick 100 00:06:58.467 tick 250 00:06:58.467 tick 100 00:06:58.467 tick 100 00:06:58.467 tick 100 00:06:58.467 tick 250 00:06:58.467 tick 500 00:06:58.467 tick 100 00:06:58.467 tick 100 00:06:58.467 tick 250 00:06:58.467 tick 100 00:06:58.467 tick 100 00:06:58.467 test_end 00:06:58.467 ************************************ 00:06:58.467 END TEST event_reactor 00:06:58.467 ************************************ 00:06:58.467 00:06:58.467 real 0m1.327s 00:06:58.467 user 0m1.128s 00:06:58.467 sys 0m0.092s 00:06:58.467 15:42:33 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.467 15:42:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:58.724 15:42:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:58.724 15:42:33 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:58.724 15:42:33 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.724 15:42:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.724 ************************************ 00:06:58.724 START TEST event_reactor_perf 00:06:58.724 ************************************ 00:06:58.724 15:42:33 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:58.724 [2024-07-20 15:42:33.324017] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:58.724 [2024-07-20 15:42:33.324275] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74837 ] 00:06:58.724 [2024-07-20 15:42:33.473436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.724 [2024-07-20 15:42:33.516590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.101 test_start 00:07:00.101 test_end 00:07:00.101 Performance: 375266 events per second 00:07:00.101 00:07:00.101 real 0m1.320s 00:07:00.101 user 0m1.131s 00:07:00.101 sys 0m0.081s 00:07:00.101 15:42:34 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.102 ************************************ 00:07:00.102 END TEST event_reactor_perf 00:07:00.102 ************************************ 00:07:00.102 15:42:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.102 15:42:34 event -- event/event.sh@49 -- # uname -s 00:07:00.102 15:42:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:00.102 15:42:34 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:00.102 15:42:34 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:00.102 15:42:34 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.102 15:42:34 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.102 ************************************ 00:07:00.102 START TEST event_scheduler 00:07:00.102 ************************************ 00:07:00.102 15:42:34 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:00.102 * Looking for test storage... 00:07:00.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:00.102 15:42:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:00.102 15:42:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=74898 00:07:00.102 15:42:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:00.102 15:42:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:00.102 15:42:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 74898 00:07:00.102 15:42:34 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 74898 ']' 00:07:00.102 15:42:34 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.102 15:42:34 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:00.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.102 15:42:34 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.102 15:42:34 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:00.102 15:42:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:00.102 [2024-07-20 15:42:34.889509] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:00.102 [2024-07-20 15:42:34.889644] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74898 ]
00:07:00.360 [2024-07-20 15:42:35.041931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:00.360 [2024-07-20 15:42:35.088681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.360 [2024-07-20 15:42:35.088880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:00.360 [2024-07-20 15:42:35.088788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:00.360 [2024-07-20 15:42:35.088980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:00.928 15:42:35 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:07:00.928 15:42:35 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0
00:07:00.928 15:42:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:07:00.928 15:42:35 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:00.928 15:42:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:00.928 POWER: Env isn't set yet!
00:07:00.928 POWER: Attempting to initialise ACPI cpufreq power management...
00:07:00.928 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:00.928 POWER: Cannot set governor of lcore 0 to userspace
00:07:00.928 POWER: Attempting to initialise PSTAT power management...
00:07:00.928 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:00.928 POWER: Cannot set governor of lcore 0 to performance
00:07:00.928 POWER: Attempting to initialise CPPC power management...
00:07:00.928 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:07:00.928 POWER: Cannot set governor of lcore 0 to userspace
00:07:00.928 POWER: Attempting to initialise VM power management...
00:07:00.928 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:07:00.928 POWER: Unable to set Power Management Environment for lcore 0
00:07:00.928 [2024-07-20 15:42:35.689865] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0
00:07:00.928 [2024-07-20 15:42:35.689901] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0
00:07:00.928 [2024-07-20 15:42:35.689917] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor
00:07:00.928 [2024-07-20 15:42:35.689945] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:07:00.928 [2024-07-20 15:42:35.689958] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:07:00.928 [2024-07-20 15:42:35.689968] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:07:00.928 15:42:35 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:00.928 15:42:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:07:00.928 15:42:35 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:00.928 15:42:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:01.188 [2024-07-20 15:42:35.760203] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
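The trace above reduces to a three-step RPC handshake: start the app with --wait-for-rpc so the framework pauses before initialization, select a scheduler over RPC, then release init (at which point the reactors start and the governor probes above run and fall back gracefully). A minimal hand-run sketch of the same flow, assuming the app listens on the default /var/tmp/spdk.sock instead of going through the harness's rpc_cmd wrapper:

    # hold initialization until RPCs arrive; 4 cores (0xF), main core 2
    /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc &
    # switch to the dynamic scheduler while the framework is still paused
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
    # let initialization proceed; reactors come up and scheduling begins
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init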
00:07:01.188 15:42:35 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.188 15:42:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:01.188 15:42:35 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:01.188 15:42:35 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.188 15:42:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:01.188 ************************************ 00:07:01.188 START TEST scheduler_create_thread 00:07:01.188 ************************************ 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.188 2 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.188 3 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.188 4 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.188 5 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.188 6 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.188 7 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.188 8 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.188 9 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.188 10 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.188 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.568 15:42:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.569 15:42:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:02.569 15:42:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:02.569 15:42:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.569 15:42:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.508 15:42:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.508 15:42:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:03.508 15:42:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.508 15:42:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.446 15:42:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.447 15:42:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:04.447 15:42:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:04.447 15:42:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.447 15:42:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.016 15:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.016 00:07:05.016 real 0m3.878s 00:07:05.016 user 0m0.023s 00:07:05.016 sys 0m0.009s 00:07:05.016 15:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.016 ************************************ 00:07:05.016 15:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.016 END TEST scheduler_create_thread 00:07:05.016 ************************************ 00:07:05.016 15:42:39 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:05.016 15:42:39 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 74898 00:07:05.016 15:42:39 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 74898 ']' 00:07:05.016 15:42:39 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 74898 00:07:05.016 15:42:39 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:07:05.016 15:42:39 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:05.016 15:42:39 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74898 00:07:05.016 15:42:39 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:05.016 15:42:39 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:05.016 killing process with pid 74898 00:07:05.016 15:42:39 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74898' 00:07:05.016 15:42:39 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 74898 00:07:05.016 15:42:39 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 74898 00:07:05.276 [2024-07-20 15:42:40.032109] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
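The killprocess helper traced here (and earlier for pid 74686) follows a consistent shape: validate the pid, confirm the process is alive and is not a sudo wrapper, then signal and reap it. A simplified sketch of just the logic visible in the xtrace output; the real helper in common/autotest_common.sh carries additional branches (for example, handling of escalated sudo children) that this omits:

    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1            # @946: a pid argument is required
      kill -0 "$pid" || return 1           # @950: signal 0 probes that the process exists
      if [ "$(uname)" = Linux ]; then      # @951: comm lookup below is Linux ps syntax
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # @952: e.g. reactor_0, reactor_2
        [ "$process_name" = sudo ] && return 1           # @956: sudo wrappers need special handling (simplified here)
      fi
      echo "killing process with pid $pid" # @964
      kill "$pid"                          # @965: default SIGTERM
      wait "$pid"                          # @970: reap so the exit status propagates
    }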
00:07:05.845 00:07:05.845 real 0m5.676s 00:07:05.845 user 0m11.768s 00:07:05.845 sys 0m0.440s 00:07:05.845 15:42:40 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.845 15:42:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.845 ************************************ 00:07:05.845 END TEST event_scheduler 00:07:05.845 ************************************ 00:07:05.845 15:42:40 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:05.845 15:42:40 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:05.845 15:42:40 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.845 15:42:40 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.845 15:42:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.845 ************************************ 00:07:05.845 START TEST app_repeat 00:07:05.845 ************************************ 00:07:05.845 15:42:40 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:07:05.845 15:42:40 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.845 15:42:40 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.845 15:42:40 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:05.845 15:42:40 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.845 15:42:40 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:05.845 15:42:40 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:05.845 15:42:40 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:05.845 15:42:40 event.app_repeat -- event/event.sh@19 -- # repeat_pid=75011 00:07:05.845 15:42:40 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.845 15:42:40 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:05.845 Process app_repeat pid: 75011 00:07:05.845 15:42:40 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 75011' 00:07:05.845 15:42:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:05.845 spdk_app_start Round 0 00:07:05.845 15:42:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:05.845 15:42:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75011 /var/tmp/spdk-nbd.sock 00:07:05.846 15:42:40 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75011 ']' 00:07:05.846 15:42:40 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:05.846 15:42:40 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:05.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:05.846 15:42:40 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:05.846 15:42:40 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:05.846 15:42:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:05.846 [2024-07-20 15:42:40.498437] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:05.846 [2024-07-20 15:42:40.498571] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75011 ] 00:07:06.105 [2024-07-20 15:42:40.649740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.105 [2024-07-20 15:42:40.695269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.105 [2024-07-20 15:42:40.695408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.674 15:42:41 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:06.674 15:42:41 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:06.674 15:42:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.934 Malloc0 00:07:06.934 15:42:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.934 Malloc1 00:07:07.203 15:42:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:07.203 /dev/nbd0 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.203 15:42:41 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:07.203 15:42:41 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:07.203 15:42:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:07.203 15:42:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:07.203 15:42:41 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:07.203 15:42:41 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:07:07.203 15:42:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:07.203 15:42:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:07.203 15:42:41 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.203 1+0 records in 00:07:07.203 1+0 records out 00:07:07.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486447 s, 8.4 MB/s 00:07:07.203 15:42:41 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.203 15:42:41 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:07.203 15:42:41 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.203 15:42:41 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:07.203 15:42:41 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.203 15:42:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:07.465 /dev/nbd1 00:07:07.465 15:42:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:07.465 15:42:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:07.465 15:42:42 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:07.465 15:42:42 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:07.465 15:42:42 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:07.465 15:42:42 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:07.465 15:42:42 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:07.465 15:42:42 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:07.465 15:42:42 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:07.465 15:42:42 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:07.465 15:42:42 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.465 1+0 records in 00:07:07.465 1+0 records out 00:07:07.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241877 s, 16.9 MB/s 00:07:07.465 15:42:42 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.465 15:42:42 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:07.465 15:42:42 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.465 15:42:42 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:07.465 15:42:42 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:07.465 15:42:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.465 15:42:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.465 15:42:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.465 15:42:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.465 
15:42:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:07.724 { 00:07:07.724 "nbd_device": "/dev/nbd0", 00:07:07.724 "bdev_name": "Malloc0" 00:07:07.724 }, 00:07:07.724 { 00:07:07.724 "nbd_device": "/dev/nbd1", 00:07:07.724 "bdev_name": "Malloc1" 00:07:07.724 } 00:07:07.724 ]' 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.724 { 00:07:07.724 "nbd_device": "/dev/nbd0", 00:07:07.724 "bdev_name": "Malloc0" 00:07:07.724 }, 00:07:07.724 { 00:07:07.724 "nbd_device": "/dev/nbd1", 00:07:07.724 "bdev_name": "Malloc1" 00:07:07.724 } 00:07:07.724 ]' 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:07.724 /dev/nbd1' 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:07.724 /dev/nbd1' 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:07.724 256+0 records in 00:07:07.724 256+0 records out 00:07:07.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131908 s, 79.5 MB/s 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:07.724 256+0 records in 00:07:07.724 256+0 records out 00:07:07.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271034 s, 38.7 MB/s 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.724 15:42:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:07.982 256+0 records in 00:07:07.982 256+0 records out 00:07:07.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284133 s, 36.9 MB/s 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.982 15:42:42 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.982 15:42:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:08.240 15:42:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:08.240 15:42:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:08.240 15:42:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:08.240 15:42:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.240 15:42:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.240 15:42:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:08.240 15:42:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.240 15:42:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.240 15:42:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.240 15:42:42 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.240 15:42:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.498 15:42:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.498 15:42:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.498 15:42:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.498 15:42:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.498 15:42:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.498 15:42:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.498 15:42:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:08.498 15:42:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.498 15:42:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.498 15:42:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:08.498 15:42:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:08.498 15:42:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:08.498 15:42:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:08.754 15:42:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:09.011 [2024-07-20 15:42:43.583675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.011 [2024-07-20 15:42:43.625462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.011 [2024-07-20 15:42:43.625465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.011 [2024-07-20 15:42:43.668466] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:09.011 [2024-07-20 15:42:43.668535] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:12.330 15:42:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:12.330 spdk_app_start Round 1 00:07:12.330 15:42:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:12.330 15:42:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75011 /var/tmp/spdk-nbd.sock 00:07:12.330 15:42:46 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75011 ']' 00:07:12.330 15:42:46 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.330 15:42:46 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:12.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:12.330 15:42:46 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:12.330 15:42:46 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:12.330 15:42:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:12.330 15:42:46 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:12.330 15:42:46 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:12.330 15:42:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.330 Malloc0 00:07:12.330 15:42:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.330 Malloc1 00:07:12.330 15:42:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.330 15:42:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.330 15:42:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.330 15:42:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:12.330 15:42:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.330 15:42:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:12.330 15:42:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.330 15:42:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.330 15:42:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.330 15:42:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:12.330 15:42:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.330 15:42:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:12.330 15:42:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:12.330 15:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:12.330 15:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.330 15:42:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:12.588 /dev/nbd0 00:07:12.588 15:42:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.588 15:42:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.588 15:42:47 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:12.588 15:42:47 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:12.588 15:42:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:12.588 15:42:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:12.588 15:42:47 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:12.588 15:42:47 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:12.588 15:42:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:12.588 15:42:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:12.588 15:42:47 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.588 1+0 records in 00:07:12.588 1+0 records out 
00:07:12.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391599 s, 10.5 MB/s 00:07:12.588 15:42:47 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.588 15:42:47 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:12.588 15:42:47 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.588 15:42:47 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:12.588 15:42:47 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:12.588 15:42:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.588 15:42:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.588 15:42:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:12.847 /dev/nbd1 00:07:12.847 15:42:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:12.847 15:42:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:12.847 15:42:47 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:12.847 15:42:47 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:12.847 15:42:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:12.847 15:42:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:12.847 15:42:47 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:12.847 15:42:47 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:12.847 15:42:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:12.847 15:42:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:12.847 15:42:47 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.847 1+0 records in 00:07:12.847 1+0 records out 00:07:12.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358389 s, 11.4 MB/s 00:07:12.847 15:42:47 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.847 15:42:47 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:12.847 15:42:47 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.847 15:42:47 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:12.847 15:42:47 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:12.847 15:42:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.847 15:42:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.847 15:42:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.847 15:42:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.847 15:42:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:13.105 { 00:07:13.105 "nbd_device": "/dev/nbd0", 00:07:13.105 "bdev_name": "Malloc0" 00:07:13.105 }, 00:07:13.105 { 00:07:13.105 "nbd_device": "/dev/nbd1", 00:07:13.105 "bdev_name": "Malloc1" 00:07:13.105 } 
00:07:13.105 ]' 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.105 { 00:07:13.105 "nbd_device": "/dev/nbd0", 00:07:13.105 "bdev_name": "Malloc0" 00:07:13.105 }, 00:07:13.105 { 00:07:13.105 "nbd_device": "/dev/nbd1", 00:07:13.105 "bdev_name": "Malloc1" 00:07:13.105 } 00:07:13.105 ]' 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:13.105 /dev/nbd1' 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:13.105 /dev/nbd1' 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:13.105 256+0 records in 00:07:13.105 256+0 records out 00:07:13.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120166 s, 87.3 MB/s 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:13.105 256+0 records in 00:07:13.105 256+0 records out 00:07:13.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262906 s, 39.9 MB/s 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:13.105 256+0 records in 00:07:13.105 256+0 records out 00:07:13.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028986 s, 36.2 MB/s 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.105 15:42:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.364 15:42:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.364 15:42:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.364 15:42:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.364 15:42:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.364 15:42:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.364 15:42:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.364 15:42:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.364 15:42:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.364 15:42:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.364 15:42:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.622 15:42:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:13.622 15:42:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:13.622 15:42:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:13.622 15:42:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.622 15:42:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.622 15:42:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:13.622 15:42:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.622 15:42:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.622 15:42:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.622 15:42:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.622 15:42:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.622 15:42:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.622 15:42:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.622 15:42:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:07:13.881 15:42:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.881 15:42:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.881 15:42:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.881 15:42:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:13.881 15:42:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.881 15:42:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.881 15:42:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.881 15:42:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.881 15:42:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.881 15:42:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:13.881 15:42:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:14.139 [2024-07-20 15:42:48.803364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.139 [2024-07-20 15:42:48.842868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.139 [2024-07-20 15:42:48.842889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.139 [2024-07-20 15:42:48.885586] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:14.139 [2024-07-20 15:42:48.885654] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:17.424 15:42:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:17.424 spdk_app_start Round 2 00:07:17.424 15:42:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:17.424 15:42:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75011 /var/tmp/spdk-nbd.sock 00:07:17.424 15:42:51 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75011 ']' 00:07:17.424 15:42:51 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:17.424 15:42:51 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:17.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:17.424 15:42:51 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
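Each app_repeat round above follows the same nbd round-trip check: a Malloc bdev is exported through the kernel nbd driver, 1 MiB of random data is written through the block device with O_DIRECT, and cmp reads it back against the source file. A condensed sketch of that flow, with rpc.py standing for scripts/rpc.py and /tmp/nbdrandtest standing in for the harness's test/event/nbdrandtest:

  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # prints the new bdev name, e.g. Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256           # 1 MiB of random data
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct # push it through the nbd device
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                            # exits non-zero on any mismatch
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0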
00:07:17.424 15:42:51 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:17.424 15:42:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.424 15:42:51 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:17.424 15:42:51 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:17.424 15:42:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.424 Malloc0 00:07:17.424 15:42:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.424 Malloc1 00:07:17.683 15:42:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:17.683 /dev/nbd0 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.683 15:42:52 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:17.683 15:42:52 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:17.683 15:42:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:17.683 15:42:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:17.683 15:42:52 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:07:17.683 15:42:52 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:17.683 15:42:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:17.683 15:42:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:17.683 15:42:52 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.683 1+0 records in 00:07:17.683 1+0 records out 
00:07:17.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251658 s, 16.3 MB/s 00:07:17.683 15:42:52 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.683 15:42:52 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:17.683 15:42:52 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.683 15:42:52 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:17.683 15:42:52 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.683 15:42:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:17.941 /dev/nbd1 00:07:17.941 15:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:17.941 15:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:17.941 15:42:52 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:17.941 15:42:52 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:17.941 15:42:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:17.941 15:42:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:17.941 15:42:52 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:17.941 15:42:52 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:17.941 15:42:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:17.941 15:42:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:17.941 15:42:52 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.941 1+0 records in 00:07:17.941 1+0 records out 00:07:17.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355844 s, 11.5 MB/s 00:07:17.941 15:42:52 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.942 15:42:52 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:17.942 15:42:52 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.942 15:42:52 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:17.942 15:42:52 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:17.942 15:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.942 15:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.942 15:42:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.942 15:42:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.942 15:42:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.206 { 00:07:18.206 "nbd_device": "/dev/nbd0", 00:07:18.206 "bdev_name": "Malloc0" 00:07:18.206 }, 00:07:18.206 { 00:07:18.206 "nbd_device": "/dev/nbd1", 00:07:18.206 "bdev_name": "Malloc1" 00:07:18.206 } 
00:07:18.206 ]' 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.206 { 00:07:18.206 "nbd_device": "/dev/nbd0", 00:07:18.206 "bdev_name": "Malloc0" 00:07:18.206 }, 00:07:18.206 { 00:07:18.206 "nbd_device": "/dev/nbd1", 00:07:18.206 "bdev_name": "Malloc1" 00:07:18.206 } 00:07:18.206 ]' 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.206 /dev/nbd1' 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.206 /dev/nbd1' 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:18.206 256+0 records in 00:07:18.206 256+0 records out 00:07:18.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112026 s, 93.6 MB/s 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.206 256+0 records in 00:07:18.206 256+0 records out 00:07:18.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217769 s, 48.2 MB/s 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.206 256+0 records in 00:07:18.206 256+0 records out 00:07:18.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270658 s, 38.7 MB/s 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.206 15:42:52 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.206 15:42:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.464 15:42:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.722 15:42:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.722 15:42:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.722 15:42:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.722 15:42:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.722 15:42:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.722 15:42:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.722 15:42:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.722 15:42:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.722 15:42:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.722 15:42:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.723 15:42:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.980 15:42:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:18.980 15:42:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:18.980 15:42:53 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:18.980 15:42:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.980 15:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.980 15:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.980 15:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:18.980 15:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:18.980 15:42:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:18.980 15:42:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:18.980 15:42:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:18.980 15:42:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:18.980 15:42:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:19.238 15:42:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:19.496 [2024-07-20 15:42:54.047762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.496 [2024-07-20 15:42:54.086187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.496 [2024-07-20 15:42:54.086191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.496 [2024-07-20 15:42:54.128865] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:19.496 [2024-07-20 15:42:54.128927] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:22.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:22.780 15:42:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 75011 /var/tmp/spdk-nbd.sock 00:07:22.780 15:42:56 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75011 ']' 00:07:22.780 15:42:56 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.780 15:42:56 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:22.780 15:42:56 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
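Between rounds the harness asserts that nbd_stop_disk detached everything: nbd_get_disks now returns an empty JSON array, the grep-based device count drops to zero, and spdk_kill_instance delivers the SIGTERM that ends the round. The counting idiom from the trace (the trailing true matters because grep -c exits non-zero when nothing matches):

  count=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)  # 2 while attached, 0 after stop
  rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM        # app_repeat then begins the next round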
00:07:22.780 15:42:56 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:22.780 15:42:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:22.780 15:42:57 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:22.780 15:42:57 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:22.780 15:42:57 event.app_repeat -- event/event.sh@39 -- # killprocess 75011 00:07:22.780 15:42:57 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 75011 ']' 00:07:22.780 15:42:57 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 75011 00:07:22.780 15:42:57 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:07:22.780 15:42:57 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:22.781 15:42:57 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75011 00:07:22.781 killing process with pid 75011 00:07:22.781 15:42:57 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:22.781 15:42:57 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:22.781 15:42:57 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75011' 00:07:22.781 15:42:57 event.app_repeat -- common/autotest_common.sh@965 -- # kill 75011 00:07:22.781 15:42:57 event.app_repeat -- common/autotest_common.sh@970 -- # wait 75011 00:07:22.781 spdk_app_start is called in Round 0. 00:07:22.781 Shutdown signal received, stop current app iteration 00:07:22.781 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:07:22.781 spdk_app_start is called in Round 1. 00:07:22.781 Shutdown signal received, stop current app iteration 00:07:22.781 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:07:22.781 spdk_app_start is called in Round 2. 00:07:22.781 Shutdown signal received, stop current app iteration 00:07:22.781 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:07:22.781 spdk_app_start is called in Round 3. 00:07:22.781 Shutdown signal received, stop current app iteration 00:07:22.781 15:42:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:22.781 15:42:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:22.781 ************************************ 00:07:22.781 END TEST app_repeat 00:07:22.781 ************************************ 00:07:22.781 00:07:22.781 real 0m16.899s 00:07:22.781 user 0m36.523s 00:07:22.781 sys 0m2.934s 00:07:22.781 15:42:57 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.781 15:42:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:22.781 15:42:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:22.781 15:42:57 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:22.781 15:42:57 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:22.781 15:42:57 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.781 15:42:57 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.781 ************************************ 00:07:22.781 START TEST cpu_locks 00:07:22.781 ************************************ 00:07:22.781 15:42:57 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:22.781 * Looking for test storage... 
00:07:22.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:22.781 15:42:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:22.781 15:42:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:22.781 15:42:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:22.781 15:42:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:22.781 15:42:57 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:22.781 15:42:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.781 15:42:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.781 ************************************ 00:07:22.781 START TEST default_locks 00:07:22.781 ************************************ 00:07:22.781 15:42:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:07:22.781 15:42:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75421 00:07:22.781 15:42:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 75421 00:07:22.781 15:42:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.781 15:42:57 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 75421 ']' 00:07:22.781 15:42:57 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.781 15:42:57 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:22.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.781 15:42:57 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.781 15:42:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:22.781 15:42:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.040 [2024-07-20 15:42:57.647238] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:23.040 [2024-07-20 15:42:57.647403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75421 ] 00:07:23.040 [2024-07-20 15:42:57.798727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.300 [2024-07-20 15:42:57.843485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.868 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:23.868 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:07:23.868 15:42:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 75421 00:07:23.868 15:42:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 75421 00:07:23.868 15:42:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.127 15:42:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 75421 00:07:24.127 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 75421 ']' 00:07:24.127 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 75421 00:07:24.127 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:07:24.127 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:24.127 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75421 00:07:24.127 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:24.127 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:24.127 killing process with pid 75421 00:07:24.127 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75421' 00:07:24.127 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 75421 00:07:24.127 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 75421 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75421 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75421 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 75421 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 75421 ']' 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:24.697 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.697 ERROR: process (pid: 75421) is no longer running 00:07:24.697 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75421) - No such process 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:24.697 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:07:24.698 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:24.698 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:24.698 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:24.698 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:24.698 15:42:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:24.698 15:42:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:24.698 15:42:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:24.698 15:42:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:24.698 00:07:24.698 real 0m1.711s 00:07:24.698 user 0m1.679s 00:07:24.698 sys 0m0.591s 00:07:24.698 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.698 ************************************ 00:07:24.698 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.698 END TEST default_locks 00:07:24.698 ************************************ 00:07:24.698 15:42:59 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:24.698 15:42:59 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:24.698 15:42:59 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.698 15:42:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.698 ************************************ 00:07:24.698 START TEST default_locks_via_rpc 00:07:24.698 ************************************ 00:07:24.698 15:42:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:07:24.698 15:42:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:24.698 15:42:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75474 00:07:24.698 15:42:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 75474 00:07:24.698 15:42:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75474 ']' 00:07:24.698 15:42:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.698 15:42:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:24.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
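default_locks, which just finished above, establishes the baseline: while spdk_tgt -m 0x1 (pid 75421) is alive, lslocks on that pid shows an entry for its spdk_cpu_lock file, and after the kill the same waitforlisten call is expected to fail, which the NOT wrapper turns into a pass. The lock probe, reconstructed from the trace:

  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock    # true iff the pid holds a core-lock file
  }
  locks_exist 75421                            # passes while the target runs, fails once it is killed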
00:07:24.698 15:42:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.698 15:42:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:24.698 15:42:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.698 [2024-07-20 15:42:59.436210] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:24.698 [2024-07-20 15:42:59.436412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75474 ] 00:07:24.968 [2024-07-20 15:42:59.595009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.968 [2024-07-20 15:42:59.639467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 75474 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 75474 00:07:25.560 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:26.128 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 75474 00:07:26.128 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 75474 ']' 00:07:26.128 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 75474 00:07:26.128 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:07:26.128 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:26.128 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 
-- # ps --no-headers -o comm= 75474 00:07:26.128 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:26.128 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:26.128 killing process with pid 75474 00:07:26.128 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75474' 00:07:26.128 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 75474 00:07:26.128 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 75474 00:07:26.387 00:07:26.387 real 0m1.701s 00:07:26.387 user 0m1.648s 00:07:26.387 sys 0m0.607s 00:07:26.387 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.387 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.387 ************************************ 00:07:26.387 END TEST default_locks_via_rpc 00:07:26.387 ************************************ 00:07:26.387 15:43:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:26.387 15:43:01 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:26.387 15:43:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.387 15:43:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.387 ************************************ 00:07:26.387 START TEST non_locking_app_on_locked_coremask 00:07:26.387 ************************************ 00:07:26.387 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:07:26.387 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75521 00:07:26.387 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 75521 /var/tmp/spdk.sock 00:07:26.387 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.387 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75521 ']' 00:07:26.387 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.387 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:26.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.387 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.387 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:26.387 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.646 [2024-07-20 15:43:01.193489] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
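default_locks_via_rpc, completed just above, drives the same state over RPC instead of process lifetime: framework_disable_cpumask_locks releases the core-lock files (so no_locks finds none), and framework_enable_cpumask_locks re-acquires them so locks_exist 75474 passes again before the target is killed. Via scripts/rpc.py (the trace uses the rpc_cmd wrapper around it):

  rpc.py framework_disable_cpumask_locks   # lslocks shows no spdk_cpu_lock entries afterwards
  rpc.py framework_enable_cpumask_locks    # the lock for core 0 is re-taken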
00:07:26.646 [2024-07-20 15:43:01.193624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75521 ] 00:07:26.646 [2024-07-20 15:43:01.344819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.646 [2024-07-20 15:43:01.388749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.213 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:27.213 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:27.213 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:27.213 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75537 00:07:27.213 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 75537 /var/tmp/spdk2.sock 00:07:27.213 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75537 ']' 00:07:27.213 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.213 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:27.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:27.213 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.213 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:27.213 15:43:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.472 [2024-07-20 15:43:02.062518] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:27.472 [2024-07-20 15:43:02.062652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75537 ] 00:07:27.472 [2024-07-20 15:43:02.207707] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
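non_locking_app_on_locked_coremask pairs two targets on the same core: pid 75521 starts normally on -m 0x1 and takes the core 0 lock, then pid 75537 starts on the identical mask but with --disable-cpumask-locks and a second RPC socket, so it never competes for the lock — hence the "CPU core locks deactivated" notice just above. Stripped to the two launches:

  spdk_tgt -m 0x1 &                                                  # pid 75521, holds the core 0 lock
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 75537, coexists without locking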
00:07:27.472 [2024-07-20 15:43:02.207763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.730 [2024-07-20 15:43:02.299635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.297 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:28.297 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:28.297 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 75521 00:07:28.297 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75521 00:07:28.297 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.862 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 75521 00:07:28.862 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75521 ']' 00:07:28.862 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75521 00:07:28.862 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:28.862 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:28.862 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75521 00:07:29.121 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:29.121 killing process with pid 75521 00:07:29.121 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:29.121 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75521' 00:07:29.121 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75521 00:07:29.121 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75521 00:07:29.689 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 75537 00:07:29.689 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75537 ']' 00:07:29.689 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75537 00:07:29.689 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:29.689 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:29.689 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75537 00:07:29.689 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:29.689 killing process with pid 75537 00:07:29.689 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:29.689 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75537' 00:07:29.689 15:43:04 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75537 00:07:29.689 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75537 00:07:30.256 00:07:30.256 real 0m3.713s 00:07:30.256 user 0m3.883s 00:07:30.256 sys 0m1.151s 00:07:30.256 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.256 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.256 ************************************ 00:07:30.256 END TEST non_locking_app_on_locked_coremask 00:07:30.256 ************************************ 00:07:30.256 15:43:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:30.256 15:43:04 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:30.256 15:43:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.256 15:43:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.256 ************************************ 00:07:30.256 START TEST locking_app_on_unlocked_coremask 00:07:30.256 ************************************ 00:07:30.256 15:43:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:30.256 15:43:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75600 00:07:30.256 15:43:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 75600 /var/tmp/spdk.sock 00:07:30.256 15:43:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:30.256 15:43:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75600 ']' 00:07:30.256 15:43:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.256 15:43:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:30.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.256 15:43:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.256 15:43:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:30.256 15:43:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.256 [2024-07-20 15:43:04.985658] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:30.256 [2024-07-20 15:43:04.985796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75600 ] 00:07:30.515 [2024-07-20 15:43:05.133731] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
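locking_app_on_unlocked_coremask, starting here, flips the roles: the first target (pid 75600) runs with --disable-cpumask-locks — hence the deactivation notice just above — and leaves core 0 unlocked, so a second, normally locking target can claim it. The launch order, per the trace:

  spdk_tgt -m 0x1 --disable-cpumask-locks &    # pid 75600, takes no lock
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # pid 75616, acquires the core 0 lock itself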
00:07:30.515 [2024-07-20 15:43:05.133811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.515 [2024-07-20 15:43:05.178552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.082 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:31.082 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:31.082 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:31.082 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=75616 00:07:31.082 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 75616 /var/tmp/spdk2.sock 00:07:31.082 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75616 ']' 00:07:31.082 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.082 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:31.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:31.082 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.082 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:31.082 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.082 [2024-07-20 15:43:05.839019] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:31.082 [2024-07-20 15:43:05.839153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75616 ] 00:07:31.341 [2024-07-20 15:43:05.988865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.341 [2024-07-20 15:43:06.078453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.908 15:43:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:31.908 15:43:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:31.908 15:43:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 75616 00:07:31.908 15:43:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75616 00:07:31.908 15:43:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:32.846 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 75600 00:07:32.846 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75600 ']' 00:07:32.846 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 75600 00:07:32.846 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:32.846 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:32.846 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75600 00:07:32.846 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:32.846 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:32.846 killing process with pid 75600 00:07:32.846 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75600' 00:07:32.846 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 75600 00:07:32.846 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 75600 00:07:33.805 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 75616 00:07:33.805 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75616 ']' 00:07:33.805 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 75616 00:07:33.805 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:33.805 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:33.805 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75616 00:07:33.805 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:33.805 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' 
reactor_0 = sudo ']' 00:07:33.805 killing process with pid 75616 00:07:33.805 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75616' 00:07:33.805 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 75616 00:07:33.805 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 75616 00:07:34.064 00:07:34.064 real 0m3.788s 00:07:34.064 user 0m3.962s 00:07:34.064 sys 0m1.175s 00:07:34.064 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.064 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.064 ************************************ 00:07:34.064 END TEST locking_app_on_unlocked_coremask 00:07:34.064 ************************************ 00:07:34.064 15:43:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:34.064 15:43:08 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:34.064 15:43:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.064 15:43:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.064 ************************************ 00:07:34.065 START TEST locking_app_on_locked_coremask 00:07:34.065 ************************************ 00:07:34.065 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:34.065 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=75685 00:07:34.065 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 75685 /var/tmp/spdk.sock 00:07:34.065 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75685 ']' 00:07:34.065 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.065 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:34.065 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.065 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:34.065 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.065 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:34.065 [2024-07-20 15:43:08.849692] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:34.065 [2024-07-20 15:43:08.849841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75685 ] 00:07:34.323 [2024-07-20 15:43:08.986412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.323 [2024-07-20 15:43:09.031495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=75696 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 75696 /var/tmp/spdk2.sock 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75696 /var/tmp/spdk2.sock 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75696 /var/tmp/spdk2.sock 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75696 ']' 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:34.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:34.891 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.149 [2024-07-20 15:43:09.731479] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:35.149 [2024-07-20 15:43:09.731601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75696 ] 00:07:35.149 [2024-07-20 15:43:09.880709] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 75685 has claimed it. 00:07:35.149 [2024-07-20 15:43:09.880789] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:35.715 ERROR: process (pid: 75696) is no longer running 00:07:35.715 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75696) - No such process 00:07:35.715 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:35.715 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:35.715 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:35.715 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.715 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:35.715 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.715 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 75685 00:07:35.715 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75685 00:07:35.715 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:36.282 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 75685 00:07:36.282 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75685 ']' 00:07:36.282 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75685 00:07:36.282 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:36.282 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:36.282 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75685 00:07:36.282 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:36.282 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:36.282 killing process with pid 75685 00:07:36.282 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75685' 00:07:36.282 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75685 00:07:36.282 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75685 00:07:36.540 00:07:36.540 real 0m2.479s 00:07:36.540 user 0m2.643s 00:07:36.540 sys 0m0.765s 00:07:36.540 15:43:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.540 15:43:11 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.540 ************************************ 00:07:36.540 END TEST locking_app_on_locked_coremask 00:07:36.540 ************************************ 00:07:36.540 15:43:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:36.540 15:43:11 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:36.540 15:43:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.540 15:43:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.540 ************************************ 00:07:36.540 START TEST locking_overlapped_coremask 00:07:36.540 ************************************ 00:07:36.540 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:36.540 15:43:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=75749 00:07:36.540 15:43:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 75749 /var/tmp/spdk.sock 00:07:36.540 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 75749 ']' 00:07:36.540 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.540 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:36.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.540 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.540 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:36.540 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.540 15:43:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:36.801 [2024-07-20 15:43:11.399934] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:36.801 [2024-07-20 15:43:11.400075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75749 ] 00:07:36.801 [2024-07-20 15:43:11.553611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.059 [2024-07-20 15:43:11.601946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.059 [2024-07-20 15:43:11.602033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.059 [2024-07-20 15:43:11.602145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=75761 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 75761 /var/tmp/spdk2.sock 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75761 /var/tmp/spdk2.sock 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75761 /var/tmp/spdk2.sock 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 75761 ']' 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:37.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:37.629 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.629 [2024-07-20 15:43:12.251189] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:37.629 [2024-07-20 15:43:12.251306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75761 ] 00:07:37.629 [2024-07-20 15:43:12.399694] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75749 has claimed it. 00:07:37.629 [2024-07-20 15:43:12.399769] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:38.204 ERROR: process (pid: 75761) is no longer running 00:07:38.204 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75761) - No such process 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 75749 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 75749 ']' 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 75749 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75749 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:38.204 killing process with pid 75749 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75749' 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 75749 00:07:38.204 15:43:12 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 75749 00:07:38.781 00:07:38.781 real 0m1.989s 00:07:38.781 user 0m5.144s 00:07:38.781 sys 0m0.538s 00:07:38.781 15:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.781 15:43:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.781 ************************************ 00:07:38.781 END TEST locking_overlapped_coremask 00:07:38.781 ************************************ 00:07:38.781 15:43:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:38.781 15:43:13 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:38.781 15:43:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.781 15:43:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.781 ************************************ 00:07:38.781 START TEST locking_overlapped_coremask_via_rpc 00:07:38.781 ************************************ 00:07:38.781 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:38.781 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=75809 00:07:38.781 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 75809 /var/tmp/spdk.sock 00:07:38.781 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:38.781 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75809 ']' 00:07:38.781 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.781 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:38.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.781 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.781 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:38.781 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.781 [2024-07-20 15:43:13.461819] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:38.781 [2024-07-20 15:43:13.461945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75809 ] 00:07:39.040 [2024-07-20 15:43:13.612351] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
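The check_remaining_locks step in the test that just ended (cpu_locks.sh lines 36-38 in the trace) is nothing more than a filesystem glob compared against a brace expansion. A standalone sketch of the same comparison for the 0x7 mask used there:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files that actually exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, i.e. a 0x7 mask
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "core locks 000-002 all present"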
00:07:39.040 [2024-07-20 15:43:13.612410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.040 [2024-07-20 15:43:13.655666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.040 [2024-07-20 15:43:13.655777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.040 [2024-07-20 15:43:13.655679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:39.609 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:39.609 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:39.609 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=75821 00:07:39.609 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 75821 /var/tmp/spdk2.sock 00:07:39.609 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:39.609 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75821 ']' 00:07:39.609 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:39.609 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:39.609 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:39.609 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:39.609 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.609 [2024-07-20 15:43:14.335287] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:39.609 [2024-07-20 15:43:14.335597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75821 ] 00:07:39.868 [2024-07-20 15:43:14.483537] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:39.868 [2024-07-20 15:43:14.483582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.868 [2024-07-20 15:43:14.572543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.868 [2024-07-20 15:43:14.572563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.868 [2024-07-20 15:43:14.572676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.438 [2024-07-20 15:43:15.122534] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75809 has claimed it. 00:07:40.438 request: 00:07:40.438 { 00:07:40.438 "method": "framework_enable_cpumask_locks", 00:07:40.438 "req_id": 1 00:07:40.438 } 00:07:40.438 Got JSON-RPC error response 00:07:40.438 response: 00:07:40.438 { 00:07:40.438 "code": -32603, 00:07:40.438 "message": "Failed to claim CPU core: 2" 00:07:40.438 } 00:07:40.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
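The -32603 response above is the expected collision: mask 0x7 (pid 75809, cores 0-2) and mask 0x1c (pid 75821, cores 2-4) overlap exactly on core 2, since 0x7 & 0x1c == 0x4. A sketch of replaying the failing call by hand, assuming the repo's scripts/rpc.py client (the rpc_cmd helper in the trace drives the same JSON-RPC method):

    # Ask the secondary target on spdk2.sock to claim its cores; this should
    # come back with the same -32603 error while pid 75809 still holds core 2.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks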
00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:40.438 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:40.439 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:40.439 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:40.439 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 75809 /var/tmp/spdk.sock 00:07:40.439 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75809 ']' 00:07:40.439 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.439 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:40.439 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.439 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:40.439 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.698 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:40.698 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:40.698 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 75821 /var/tmp/spdk2.sock 00:07:40.698 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75821 ']' 00:07:40.698 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:40.698 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:40.698 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:40.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
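waitforlisten, seen twice just above, polls until the given pid is alive and its RPC socket is reachable, capping itself at max_retries=100. A minimal loop in the same spirit (waitforlisten_sketch and the socket test are this sketch's own simplifications, not the verbatim helper from autotest_common.sh):

    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1  # target died before listening
            [[ -S $sock ]] && return 0              # UNIX domain socket is up
            sleep 0.1
        done
        return 1                                    # retries exhausted
    }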
00:07:40.698 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:40.698 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.958 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:40.958 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:40.958 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:40.958 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:40.958 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:40.958 ************************************ 00:07:40.958 END TEST locking_overlapped_coremask_via_rpc 00:07:40.958 ************************************ 00:07:40.958 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:40.958 00:07:40.958 real 0m2.153s 00:07:40.958 user 0m0.890s 00:07:40.958 sys 0m0.193s 00:07:40.958 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.958 15:43:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.958 15:43:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:40.958 15:43:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75809 ]] 00:07:40.959 15:43:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75809 00:07:40.959 15:43:15 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75809 ']' 00:07:40.959 15:43:15 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75809 00:07:40.959 15:43:15 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:40.959 15:43:15 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:40.959 15:43:15 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75809 00:07:40.959 killing process with pid 75809 00:07:40.959 15:43:15 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:40.959 15:43:15 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:40.959 15:43:15 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75809' 00:07:40.959 15:43:15 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 75809 00:07:40.959 15:43:15 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 75809 00:07:41.218 15:43:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75821 ]] 00:07:41.218 15:43:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75821 00:07:41.218 15:43:16 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75821 ']' 00:07:41.218 15:43:16 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75821 00:07:41.218 15:43:16 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:41.477 15:43:16 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:41.477 
15:43:16 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75821 00:07:41.477 killing process with pid 75821 00:07:41.477 15:43:16 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:41.477 15:43:16 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:41.477 15:43:16 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75821' 00:07:41.477 15:43:16 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 75821 00:07:41.477 15:43:16 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 75821 00:07:41.736 15:43:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:41.736 Process with pid 75809 is not found 00:07:41.736 Process with pid 75821 is not found 00:07:41.736 15:43:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:41.736 15:43:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75809 ]] 00:07:41.736 15:43:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75809 00:07:41.736 15:43:16 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75809 ']' 00:07:41.736 15:43:16 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75809 00:07:41.736 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (75809) - No such process 00:07:41.736 15:43:16 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 75809 is not found' 00:07:41.736 15:43:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75821 ]] 00:07:41.736 15:43:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75821 00:07:41.736 15:43:16 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75821 ']' 00:07:41.736 15:43:16 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75821 00:07:41.736 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (75821) - No such process 00:07:41.736 15:43:16 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 75821 is not found' 00:07:41.736 15:43:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:41.736 00:07:41.736 real 0m19.026s 00:07:41.736 user 0m30.465s 00:07:41.736 sys 0m6.133s 00:07:41.736 15:43:16 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.736 15:43:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.736 ************************************ 00:07:41.736 END TEST cpu_locks 00:07:41.736 ************************************ 00:07:41.736 00:07:41.736 real 0m46.148s 00:07:41.736 user 1m25.293s 00:07:41.736 sys 0m10.165s 00:07:41.736 15:43:16 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:41.736 15:43:16 event -- common/autotest_common.sh@10 -- # set +x 00:07:41.736 ************************************ 00:07:41.736 END TEST event 00:07:41.736 ************************************ 00:07:41.996 15:43:16 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:41.996 15:43:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:41.996 15:43:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.996 15:43:16 -- common/autotest_common.sh@10 -- # set +x 00:07:41.996 ************************************ 00:07:41.996 START TEST thread 00:07:41.996 ************************************ 00:07:41.996 15:43:16 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:41.996 * Looking for test storage... 
00:07:41.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:41.996 15:43:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:41.996 15:43:16 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:41.996 15:43:16 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.996 15:43:16 thread -- common/autotest_common.sh@10 -- # set +x 00:07:41.996 ************************************ 00:07:41.996 START TEST thread_poller_perf 00:07:41.996 ************************************ 00:07:41.996 15:43:16 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:41.996 [2024-07-20 15:43:16.734485] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:41.996 [2024-07-20 15:43:16.734751] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75946 ] 00:07:42.254 [2024-07-20 15:43:16.886050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.254 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:42.254 [2024-07-20 15:43:16.928106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.635 ====================================== 00:07:43.635 busy:2502787018 (cyc) 00:07:43.635 total_run_count: 407000 00:07:43.635 tsc_hz: 2490000000 (cyc) 00:07:43.635 ====================================== 00:07:43.635 poller_cost: 6149 (cyc), 2469 (nsec) 00:07:43.635 ************************************ 00:07:43.635 END TEST thread_poller_perf 00:07:43.635 ************************************ 00:07:43.635 00:07:43.635 real 0m1.333s 00:07:43.635 user 0m1.126s 00:07:43.635 sys 0m0.100s 00:07:43.635 15:43:18 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.635 15:43:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:43.635 15:43:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:43.635 15:43:18 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:43.635 15:43:18 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.635 15:43:18 thread -- common/autotest_common.sh@10 -- # set +x 00:07:43.635 ************************************ 00:07:43.635 START TEST thread_poller_perf 00:07:43.635 ************************************ 00:07:43.635 15:43:18 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:43.635 [2024-07-20 15:43:18.121484] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:43.635 [2024-07-20 15:43:18.121642] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75977 ] 00:07:43.635 [2024-07-20 15:43:18.276989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.635 Running 1000 pollers for 1 seconds with 0 microseconds period. 
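The ====== summary just printed (and the one the 0-microsecond run below will print) reduces to simple arithmetic: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure divides that by tsc_hz scaled to cycles per nanosecond. Checking the first run's numbers in shell:

    busy=2502787018 runs=407000 tsc_hz=2490000000
    echo "poller_cost: $(( busy / runs )) (cyc), $(( busy / runs * 1000000000 / tsc_hz )) (nsec)"
    # -> poller_cost: 6149 (cyc), 2469 (nsec), matching the summary above

The second run works out the same way: 2493971034 / 5369000 comes to about 464 cycles, roughly 186 ns per poll when the pollers run back-to-back with no period.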
00:07:43.635 [2024-07-20 15:43:18.321265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.014 ====================================== 00:07:45.015 busy:2493971034 (cyc) 00:07:45.015 total_run_count: 5369000 00:07:45.015 tsc_hz: 2490000000 (cyc) 00:07:45.015 ====================================== 00:07:45.015 poller_cost: 464 (cyc), 186 (nsec) 00:07:45.015 ************************************ 00:07:45.015 END TEST thread_poller_perf 00:07:45.015 ************************************ 00:07:45.015 00:07:45.015 real 0m1.323s 00:07:45.015 user 0m1.119s 00:07:45.015 sys 0m0.097s 00:07:45.015 15:43:19 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.015 15:43:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:45.015 15:43:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:45.015 ************************************ 00:07:45.015 END TEST thread 00:07:45.015 ************************************ 00:07:45.015 00:07:45.015 real 0m2.905s 00:07:45.015 user 0m2.338s 00:07:45.015 sys 0m0.354s 00:07:45.015 15:43:19 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.015 15:43:19 thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.015 15:43:19 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:45.015 15:43:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:45.015 15:43:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.015 15:43:19 -- common/autotest_common.sh@10 -- # set +x 00:07:45.015 ************************************ 00:07:45.015 START TEST accel 00:07:45.015 ************************************ 00:07:45.015 15:43:19 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:45.015 * Looking for test storage... 00:07:45.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:45.015 15:43:19 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:45.015 15:43:19 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:45.015 15:43:19 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:45.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.015 15:43:19 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=76058 00:07:45.015 15:43:19 accel -- accel/accel.sh@63 -- # waitforlisten 76058 00:07:45.015 15:43:19 accel -- common/autotest_common.sh@827 -- # '[' -z 76058 ']' 00:07:45.015 15:43:19 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.015 15:43:19 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:45.015 15:43:19 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
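The spdk_tgt -c /dev/fd/63 launch that follows is bash process substitution: build_accel_config joins the fragments collected in accel_json_cfg (empty in this run) into JSON on an anonymous pipe, so the target reads its config without a file ever touching disk. The shape of the trick in isolation:

    # cat stands in for "spdk_tgt -c"; the <(...) path materializes under /dev/fd.
    cat <(echo '{"subsystems": []}')
    ls -l <(true)   # shows the /dev/fd/NN path a child process is handed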
00:07:45.015 15:43:19 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:45.015 15:43:19 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:45.015 15:43:19 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:45.015 15:43:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.015 15:43:19 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.015 15:43:19 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.015 15:43:19 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.015 15:43:19 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.015 15:43:19 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.015 15:43:19 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:45.015 15:43:19 accel -- accel/accel.sh@41 -- # jq -r . 00:07:45.015 [2024-07-20 15:43:19.756975] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:45.015 [2024-07-20 15:43:19.757129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76058 ] 00:07:45.276 [2024-07-20 15:43:19.900891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.276 [2024-07-20 15:43:19.945232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@860 -- # return 0 00:07:45.845 15:43:20 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:45.845 15:43:20 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:45.845 15:43:20 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:45.845 15:43:20 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:45.845 15:43:20 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:45.845 15:43:20 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.845 15:43:20 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.845 15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 
15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 15:43:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # IFS== 00:07:45.845 15:43:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:45.845 15:43:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:45.845 15:43:20 accel -- accel/accel.sh@75 -- # killprocess 76058 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@946 -- # '[' -z 76058 ']' 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@950 -- # kill -0 76058 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@951 -- # uname 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76058 00:07:45.845 killing process with pid 76058 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76058' 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@965 -- # kill 76058 00:07:45.845 15:43:20 accel -- common/autotest_common.sh@970 -- # wait 76058 00:07:46.414 15:43:20 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:46.414 15:43:20 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:46.414 15:43:20 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:46.414 15:43:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.414 15:43:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.414 15:43:20 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:46.414 15:43:20 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:46.414 15:43:20 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:46.414 15:43:21 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.414 15:43:21 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.414 15:43:21 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.414 15:43:21 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.414 15:43:21 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.414 15:43:21 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:46.414 15:43:21 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
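The expected_opcs table filled in above comes from the accel_get_opc_assignments | jq pipeline a few records earlier: jq flattens the opcode-to-module object into key=value lines, and the IFS== read splits each one. A self-contained rerun of that parsing with a hypothetical two-opcode payload:

    echo '{"copy": "software", "fill": "software"}' |
        jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' |
        while IFS== read -r opc module; do
            echo "opcode $opc -> $module"
        done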
00:07:46.414 15:43:21 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.414 15:43:21 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:46.414 15:43:21 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:46.414 15:43:21 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:46.414 15:43:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.414 15:43:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.414 ************************************ 00:07:46.414 START TEST accel_missing_filename 00:07:46.414 ************************************ 00:07:46.414 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:46.414 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:46.414 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:46.414 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:46.414 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.414 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:46.414 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.414 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:46.414 15:43:21 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:46.414 15:43:21 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:46.414 15:43:21 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.414 15:43:21 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.414 15:43:21 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.414 15:43:21 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.414 15:43:21 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.414 15:43:21 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:46.414 15:43:21 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:46.414 [2024-07-20 15:43:21.176614] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:46.414 [2024-07-20 15:43:21.176735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76106 ] 00:07:46.674 [2024-07-20 15:43:21.329257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.674 [2024-07-20 15:43:21.374217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.674 [2024-07-20 15:43:21.419298] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.934 [2024-07-20 15:43:21.489316] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:46.934 A filename is required. 
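This negative test leans on the harness's NOT wrapper: accel_perf is expected to fail (compress without -l has no input file), and the es=234, es=106, es=1 chain below is the wrapper normalizing the raw exit status before asserting it is non-zero. A rough sketch of that idea in plain bash (simplified, not the literal autotest_common.sh implementation):

# Sketch of a NOT-style negative test with exit-status normalization (assumed):
NOT() {
  local es=0
  "$@" || es=$?
  (( es > 128 )) && es=$(( es - 128 ))   # strip the 128+signal offset (234 -> 106)
  (( es != 0 ))                          # succeed only when the command failed
}
# usage: NOT accel_perf -t 1 -w compress   # must fail: compress needs -l <file>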
00:07:46.934 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:46.934 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.934 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:46.934 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:46.934 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:46.934 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.934 00:07:46.934 real 0m0.463s 00:07:46.934 user 0m0.242s 00:07:46.934 sys 0m0.156s 00:07:46.934 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.934 15:43:21 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:46.934 ************************************ 00:07:46.934 END TEST accel_missing_filename 00:07:46.934 ************************************ 00:07:46.934 15:43:21 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:46.934 15:43:21 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:46.934 15:43:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.934 15:43:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.934 ************************************ 00:07:46.934 START TEST accel_compress_verify 00:07:46.934 ************************************ 00:07:46.934 15:43:21 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:46.934 15:43:21 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:46.934 15:43:21 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:46.934 15:43:21 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:46.934 15:43:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.934 15:43:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:46.934 15:43:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.934 15:43:21 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:46.934 15:43:21 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:46.934 15:43:21 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:46.934 15:43:21 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.934 15:43:21 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.934 15:43:21 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.934 15:43:21 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.934 15:43:21 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.934 15:43:21 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:46.934 15:43:21 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:07:46.934 [2024-07-20 15:43:21.703186] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:46.934 [2024-07-20 15:43:21.703336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76137 ] 00:07:47.194 [2024-07-20 15:43:21.854881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.194 [2024-07-20 15:43:21.898783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.194 [2024-07-20 15:43:21.943851] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.453 [2024-07-20 15:43:22.013480] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:47.453 00:07:47.453 Compression does not support the verify option, aborting. 00:07:47.453 15:43:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:47.453 15:43:22 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:47.453 15:43:22 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:47.453 15:43:22 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:47.453 15:43:22 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:47.453 15:43:22 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:47.453 00:07:47.454 real 0m0.456s 00:07:47.454 user 0m0.238s 00:07:47.454 sys 0m0.157s 00:07:47.454 15:43:22 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.454 15:43:22 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:47.454 ************************************ 00:07:47.454 END TEST accel_compress_verify 00:07:47.454 ************************************ 00:07:47.454 15:43:22 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:47.454 15:43:22 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:47.454 15:43:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.454 15:43:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.454 ************************************ 00:07:47.454 START TEST accel_wrong_workload 00:07:47.454 ************************************ 00:07:47.454 15:43:22 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:47.454 15:43:22 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:47.454 15:43:22 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:47.454 15:43:22 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:47.454 15:43:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.454 15:43:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:47.454 15:43:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.454 15:43:22 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:47.454 15:43:22 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:47.454 15:43:22 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:47.454 15:43:22 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.454 15:43:22 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.454 15:43:22 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.454 15:43:22 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.454 15:43:22 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.454 15:43:22 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:47.454 15:43:22 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:47.454 Unsupported workload type: foobar 00:07:47.454 [2024-07-20 15:43:22.223327] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:47.713 accel_perf options: 00:07:47.713 [-h help message] 00:07:47.713 [-q queue depth per core] 00:07:47.713 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:47.713 [-T number of threads per core 00:07:47.713 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:47.713 [-t time in seconds] 00:07:47.713 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:47.713 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:47.713 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:47.713 [-l for compress/decompress workloads, name of uncompressed input file 00:07:47.713 [-S for crc32c workload, use this seed value (default 0) 00:07:47.713 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:47.713 [-f for fill workload, use this BYTE value (default 255) 00:07:47.713 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:47.713 [-y verify result if this switch is on] 00:07:47.713 [-a tasks to allocate per core (default: same value as -q)] 00:07:47.713 Can be used to spread operations across a wider range of memory. 
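Taking only the flags documented in the usage text above, a well-formed invocation would look like the following; the flag values are arbitrary examples, and the binary path is the one used elsewhere in this log:

# -q queue depth, -o transfer size, -t seconds, -w workload, -S crc32c seed, -y verify
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -q 64 -o 4096 -t 1 -w crc32c -S 32 -y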
00:07:47.713 15:43:22 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:47.713 ************************************ 00:07:47.713 END TEST accel_wrong_workload 00:07:47.713 ************************************ 00:07:47.713 15:43:22 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:47.713 15:43:22 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:47.713 15:43:22 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:47.713 00:07:47.713 real 0m0.083s 00:07:47.713 user 0m0.082s 00:07:47.713 sys 0m0.044s 00:07:47.713 15:43:22 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.713 15:43:22 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:47.714 15:43:22 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:47.714 15:43:22 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:47.714 15:43:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.714 15:43:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.714 ************************************ 00:07:47.714 START TEST accel_negative_buffers 00:07:47.714 ************************************ 00:07:47.714 15:43:22 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:47.714 15:43:22 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:47.714 15:43:22 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:47.714 15:43:22 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:47.714 15:43:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.714 15:43:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:47.714 15:43:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.714 15:43:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:47.714 15:43:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:47.714 15:43:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:47.714 15:43:22 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.714 15:43:22 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.714 15:43:22 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.714 15:43:22 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.714 15:43:22 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.714 15:43:22 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:47.714 15:43:22 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:47.714 -x option must be non-negative. 
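The rejection above is accel_perf's own argument validation. For comparison, an equivalent guard written in the harness's language would look roughly like this (hypothetical bash, not code from this repository):

# Hypothetical bash equivalent of the non-negative check on -x:
while getopts "x:" opt; do
  case "$opt" in
    x) if (( OPTARG < 0 )); then
         printf '%s\n' '-x option must be non-negative.' >&2
         exit 1
       fi ;;
  esac
done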
00:07:47.714 [2024-07-20 15:43:22.367513] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:47.714 accel_perf options: 00:07:47.714 [-h help message] 00:07:47.714 [-q queue depth per core] 00:07:47.714 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:47.714 [-T number of threads per core 00:07:47.714 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:47.714 [-t time in seconds] 00:07:47.714 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:47.714 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:47.714 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:47.714 [-l for compress/decompress workloads, name of uncompressed input file 00:07:47.714 [-S for crc32c workload, use this seed value (default 0) 00:07:47.714 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:47.714 [-f for fill workload, use this BYTE value (default 255) 00:07:47.714 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:47.714 [-y verify result if this switch is on] 00:07:47.714 [-a tasks to allocate per core (default: same value as -q)] 00:07:47.714 Can be used to spread operations across a wider range of memory. 00:07:47.714 15:43:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:47.714 15:43:22 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:47.714 15:43:22 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:47.714 ************************************ 00:07:47.714 END TEST accel_negative_buffers 00:07:47.714 ************************************ 00:07:47.714 15:43:22 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:47.714 00:07:47.714 real 0m0.084s 00:07:47.714 user 0m0.079s 00:07:47.714 sys 0m0.046s 00:07:47.714 15:43:22 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.714 15:43:22 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:47.714 15:43:22 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:47.714 15:43:22 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:47.714 15:43:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.714 15:43:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.714 ************************************ 00:07:47.714 START TEST accel_crc32c 00:07:47.714 ************************************ 00:07:47.714 15:43:22 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:47.714 15:43:22 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:47.714 15:43:22 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:47.714 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.714 15:43:22 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:47.714 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.714 15:43:22 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:47.714 15:43:22 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:47.714 15:43:22 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.714 15:43:22 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.714 15:43:22 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.714 15:43:22 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.714 15:43:22 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.714 15:43:22 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:47.714 15:43:22 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:47.972 [2024-07-20 15:43:22.514811] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:47.972 [2024-07-20 15:43:22.514943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76193 ] 00:07:47.972 [2024-07-20 15:43:22.662715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.972 [2024-07-20 15:43:22.707524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.972 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:47.973 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.973 15:43:22 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:47.973 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.973 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.973 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:48.231 15:43:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.167 ************************************ 00:07:49.167 END TEST accel_crc32c 00:07:49.167 ************************************ 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:49.167 15:43:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.167 00:07:49.167 real 0m1.449s 00:07:49.167 user 0m1.221s 00:07:49.167 sys 0m0.143s 00:07:49.167 15:43:23 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.167 15:43:23 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:49.425 15:43:23 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:49.425 15:43:23 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:49.425 15:43:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.425 15:43:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.425 ************************************ 00:07:49.425 START TEST accel_crc32c_C2 00:07:49.425 ************************************ 00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
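The build_accel_config dump in progress here has the same shape in every test: start with an empty accel_json_cfg array, append module-specific JSON fragments only when the corresponding feature counters are greater than zero (the [[ 0 -gt 0 ]] checks), join any fragments with IFS=, and feed the result through jq -r . as the -c config on fd 62. A condensed sketch of that flow (paraphrased from the trace, not the literal accel.sh source):

# Condensed sketch of the traced build_accel_config flow (paraphrased, simplified):
build_accel_config_sketch() {
  accel_json_cfg=()     # stays empty when no hardware modules are requested
  # fragments would be appended here when DSA/IAA/crypto switches are > 0
  local IFS=,           # comma-join whatever fragments were collected
  printf '%s\n' "${accel_json_cfg[*]}" | jq -r .   # normalize/emit the JSON config
}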
00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:49.425 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:49.425 [2024-07-20 15:43:24.033161] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:49.425 [2024-07-20 15:43:24.033286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76234 ] 00:07:49.425 [2024-07-20 15:43:24.172384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.425 [2024-07-20 15:43:24.217198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:49.683 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.684 15:43:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.618 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.618 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.618 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.618 15:43:25 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:07:50.618 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.618 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.618 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.618 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.618 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.618 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.618 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:50.876 ************************************ 00:07:50.876 END TEST accel_crc32c_C2 00:07:50.876 ************************************ 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.876 00:07:50.876 real 0m1.441s 00:07:50.876 user 0m1.215s 00:07:50.876 sys 0m0.142s 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.876 15:43:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:50.876 15:43:25 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:50.876 15:43:25 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:50.876 15:43:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.876 15:43:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.876 ************************************ 00:07:50.876 START TEST accel_copy 00:07:50.876 ************************************ 00:07:50.876 15:43:25 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:50.876 15:43:25 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:50.876 15:43:25 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:50.876 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:50.876 15:43:25 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:50.876 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:50.876 15:43:25 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:50.876 15:43:25 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:07:50.876 15:43:25 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.876 15:43:25 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.876 15:43:25 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.876 15:43:25 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.876 15:43:25 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.876 15:43:25 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:50.876 15:43:25 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:50.876 [2024-07-20 15:43:25.548907] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:50.876 [2024-07-20 15:43:25.549206] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76264 ] 00:07:51.156 [2024-07-20 15:43:25.697586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.156 [2024-07-20 15:43:25.741663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
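The long val= / case / IFS=: / read -r var val runs, like the one in progress here, are the harness reading accel_perf's key: value report one line at a time and capturing fields such as accel_module=software and accel_opc=copy. A schematic reconstruction of that reader (the case patterns are guesses; only the read/case skeleton and the captured variable names come from the trace):

# Schematic key:value reader per the trace (case patterns are assumptions):
while IFS=: read -r var val; do
  case "$var" in
    *module*) accel_module=${val//[[:space:]]/} ;;   # e.g. software
    *opc*)    accel_opc=${val//[[:space:]]/}    ;;   # e.g. copy
    *)        : ;;                                   # ignore other report lines
  esac
done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y)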
00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:51.156 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.157 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.157 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:51.157 15:43:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:51.157 15:43:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:51.157 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:51.157 15:43:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:26 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:52.592 ************************************ 00:07:52.592 END TEST accel_copy 00:07:52.592 ************************************ 00:07:52.592 15:43:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.592 00:07:52.592 real 0m1.454s 00:07:52.592 user 0m1.220s 00:07:52.592 sys 0m0.148s 00:07:52.592 15:43:26 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.592 15:43:26 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:52.592 15:43:27 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:52.592 15:43:27 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:52.592 15:43:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.592 15:43:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.592 ************************************ 00:07:52.592 START TEST accel_fill 00:07:52.592 ************************************ 00:07:52.592 15:43:27 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
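The run_test line above drives the fill workload as accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y. Restated against the usage text from earlier in the log (config plumbing via -c omitted here):

# The accel_fill flags, spelled out:
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
# -f 128: fill byte value; -q 64: queue depth per core;
# -a 64: tasks to allocate per core; -y: verify the result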
00:07:52.592 [2024-07-20 15:43:27.072752] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:52.592 [2024-07-20 15:43:27.072880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76305 ] 00:07:52.592 [2024-07-20 15:43:27.224392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.592 [2024-07-20 15:43:27.266838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:52.592 15:43:27 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:52.592 15:43:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:53.968 15:43:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.968 00:07:53.968 real 0m1.452s 00:07:53.968 user 0m0.020s 00:07:53.968 sys 0m0.003s 00:07:53.968 15:43:28 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.968 15:43:28 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:53.968 ************************************ 00:07:53.968 END TEST accel_fill 00:07:53.968 ************************************ 00:07:53.968 15:43:28 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:53.968 15:43:28 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:53.968 15:43:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.968 15:43:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.968 ************************************ 00:07:53.968 START TEST accel_copy_crc32c 00:07:53.968 ************************************ 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:53.968 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:53.968 [2024-07-20 15:43:28.598441] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
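The accel_fill pass above completes cleanly on the software module: the wrap-up checks confirm accel_module=software and accel_opc=fill, and the wrapper reports about 1.45 s of wall time for a 1-second workload plus setup and teardown. Judging from the accel_perf command recorded in the log, a roughly equivalent standalone run would look like the sketch below; it assumes the same repo layout as this CI host and drops the -c /dev/fd/62 argument, which the harness uses to feed a generated JSON accel configuration into the tool.

  # Fill 4096-byte buffers for 1 second on the software accel module, verifying results (-y)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -y

The accel_copy_crc32c test launching here exercises the fused copy-plus-CRC-32C opcode, which copies a 4096-byte source buffer and computes its CRC-32C in a single operation.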
00:07:53.968 [2024-07-20 15:43:28.598693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76335 ] 00:07:53.968 [2024-07-20 15:43:28.748250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.226 [2024-07-20 15:43:28.790434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.226 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.226 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.226 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.226 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.227 15:43:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.606 00:07:55.606 real 0m1.452s 00:07:55.606 user 0m0.023s 00:07:55.606 sys 0m0.003s 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.606 ************************************ 00:07:55.606 END TEST accel_copy_crc32c 00:07:55.606 ************************************ 00:07:55.606 15:43:29 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:55.606 15:43:30 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:55.606 15:43:30 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:55.606 15:43:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.606 15:43:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.606 ************************************ 00:07:55.606 START TEST accel_copy_crc32c_C2 00:07:55.606 ************************************ 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:55.606 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:55.607 [2024-07-20 15:43:30.115469] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:55.607 [2024-07-20 15:43:30.115760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76376 ] 00:07:55.607 [2024-07-20 15:43:30.266186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.607 [2024-07-20 15:43:30.309923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:55.607 15:43:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.983 00:07:56.983 real 0m1.448s 00:07:56.983 user 0m0.020s 00:07:56.983 sys 0m0.004s 00:07:56.983 15:43:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:56.983 ************************************ 00:07:56.984 END TEST accel_copy_crc32c_C2 00:07:56.984 ************************************ 00:07:56.984 15:43:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:56.984 15:43:31 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:07:56.984 15:43:31 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:56.984 15:43:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:56.984 15:43:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.984 ************************************ 00:07:56.984 START TEST accel_dualcast 00:07:56.984 ************************************ 00:07:56.984 15:43:31 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:56.984 15:43:31 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:56.984 15:43:31 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:56.984 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:56.984 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:56.984 15:43:31 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:56.984 15:43:31 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:56.984 15:43:31 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:56.984 15:43:31 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.984 15:43:31 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.984 15:43:31 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.984 15:43:31 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.984 15:43:31 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.984 15:43:31 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:56.984 15:43:31 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:56.984 [2024-07-20 15:43:31.638510] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
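The -C 2 variant that finished just above appears to chain the copy-plus-CRC operation across two segments, which is why its configuration echo lists both '4096 bytes' and '8192 bytes'. Each of these runs also re-initializes DPDK's EAL, and the bracketed parameter line below is worth decoding once: -c 0x1 pins the app to a single core, --no-shconf plus --file-prefix=spdk_pid<NNN> keep the process's hugepage and config files private so concurrent SPDK jobs do not collide, --huge-unlink removes the hugepage files once mapped, --iova-mode=pa selects physical-address IOVAs, --base-virtaddr fixes where shared memory is mapped, --match-allocations returns hugepages to the system exactly as they were allocated, and --no-telemetry with the --log-level switches trims runtime noise. The same flags work for any DPDK-based application; as an illustration (the app name here is hypothetical):

  # Run a DPDK app on one core with private, self-cleaning hugepage state
  ./my_dpdk_app -c 0x1 --no-shconf --huge-unlink --file-prefix=myapp --iova-mode=pa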
00:07:56.984 [2024-07-20 15:43:31.638634] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76406 ] 00:07:57.242 [2024-07-20 15:43:31.788399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.242 [2024-07-20 15:43:31.831059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:57.242 15:43:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.616 
15:43:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:58.616 15:43:33 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.616 ************************************ 00:07:58.616 END TEST accel_dualcast 00:07:58.616 ************************************ 00:07:58.616 00:07:58.616 real 0m1.449s 00:07:58.616 user 0m1.218s 00:07:58.616 sys 0m0.146s 00:07:58.616 15:43:33 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.617 15:43:33 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:58.617 15:43:33 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:58.617 15:43:33 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:58.617 15:43:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.617 15:43:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.617 ************************************ 00:07:58.617 START TEST accel_compare 00:07:58.617 ************************************ 00:07:58.617 15:43:33 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:58.617 [2024-07-20 15:43:33.158073] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
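The dualcast pass is the one run in this stretch with substantial CPU accounting (real 0m1.449s, user 0m1.218s, sys 0m0.146s): dualcast writes a single 4096-byte source to two destination buffers, and the software path spends most of the measured second busy in user space doing those copies. The real/user/sys lines throughout this section come from bash's time builtin, which the run_test helper wraps around each test; a minimal sketch of that wrapper, inferred from the banners in this log rather than taken from the actual helper in autotest_common.sh:

  run_test() {                      # sketch only; the real helper does more bookkeeping
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"                     # emits the real/user/sys lines seen in this log
      echo "************ END TEST $name ************"
  }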
00:07:58.617 [2024-07-20 15:43:33.158329] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76442 ] 00:07:58.617 [2024-07-20 15:43:33.309529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.617 [2024-07-20 15:43:33.351664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.617 15:43:33 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.617 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:58.874 15:43:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.807 15:43:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.807 15:43:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.807 15:43:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.807 15:43:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.807 15:43:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.807 15:43:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.807 15:43:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.807 15:43:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.808 15:43:34 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:59.808 15:43:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.808 00:07:59.808 real 0m1.448s 00:07:59.808 user 0m0.016s 00:07:59.808 sys 0m0.007s 00:07:59.808 15:43:34 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.808 ************************************ 00:07:59.808 END TEST accel_compare 00:07:59.808 ************************************ 00:07:59.808 15:43:34 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:00.066 15:43:34 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:00.066 15:43:34 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:00.066 15:43:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:00.066 15:43:34 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.066 ************************************ 00:08:00.066 START TEST accel_xor 00:08:00.066 ************************************ 00:08:00.066 15:43:34 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:08:00.066 15:43:34 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:00.066 15:43:34 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:00.066 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.066 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.066 15:43:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:00.066 15:43:34 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:00.066 15:43:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:00.066 15:43:34 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.066 15:43:34 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.067 15:43:34 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.067 15:43:34 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.067 15:43:34 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.067 15:43:34 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:00.067 15:43:34 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:00.067 [2024-07-20 15:43:34.678819] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
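The compare workload checks two 4096-byte buffers for equality rather than moving data, and it lands in the same ~1.45 s envelope as the other software-module runs. The xor test that starts next uses the default of two source buffers (note the val=2 line in its configuration echo below); a roughly equivalent standalone invocation, under the same path assumptions as the fill example earlier:

  # XOR two 4096-byte source buffers into a destination for 1 second, verifying output
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y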
00:08:00.067 [2024-07-20 15:43:34.678969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76477 ] 00:08:00.067 [2024-07-20 15:43:34.829732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.325 [2024-07-20 15:43:34.875449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.325 15:43:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:00.326 15:43:34 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:00.326 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.703 15:43:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.704 00:08:01.704 real 0m1.455s 00:08:01.704 user 0m0.020s 00:08:01.704 sys 0m0.005s 00:08:01.704 15:43:36 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.704 ************************************ 00:08:01.704 END TEST accel_xor 00:08:01.704 ************************************ 00:08:01.704 15:43:36 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:01.704 15:43:36 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:01.704 15:43:36 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:01.704 15:43:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.704 15:43:36 accel -- common/autotest_common.sh@10 -- # set +x 00:08:01.704 ************************************ 00:08:01.704 START TEST accel_xor 00:08:01.704 ************************************ 00:08:01.704 15:43:36 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:01.704 [2024-07-20 15:43:36.205769] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
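The two-source xor pass finishes at real 0m1.455s, and the suite immediately reruns the same workload with -x 3, raising the source count to three buffers (the val=3 line in the configuration echo below). The only change from the previous invocation is that flag:

  # Same xor workload with three source buffers instead of the default two
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3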
00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:08:01.704 [2024-07-20 15:43:36.205769] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:01.704 [2024-07-20 15:43:36.206043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76513 ]
00:08:01.704 [2024-07-20 15:43:36.352464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:01.704 [2024-07-20 15:43:36.397834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:08:01.704 15:43:36 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:08:03.082 15:43:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:03.082 15:43:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:08:03.082 15:43:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:03.082 real 0m1.454s
00:08:03.082 user 0m0.022s
00:08:03.082 sys 0m0.003s
00:08:03.082 ************************************
00:08:03.082 END TEST accel_xor
00:08:03.082 ************************************
00:08:03.082 15:43:37 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:08:03.082 ************************************
00:08:03.082 START TEST accel_dif_verify
00:08:03.082 ************************************
00:08:03.082 15:43:37 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:08:03.082 15:43:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
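dif_verify exercises protection-information checking: the sizes the harness reads back below (two 4096-byte buffer values, a 512-byte value and an 8-byte value) look like T10 DIF geometry, where each data interval carries an 8-byte guard/application/reference tag that the verify opcode recomputes and compares, though the log itself does not label the fields. A hand-run sketch under the same path assumption as above:

    # verify DIF tags in software for one second
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify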
00:08:03.082 15:43:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:08:03.082 15:43:37 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:08:03.082 [2024-07-20 15:43:37.730868] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:03.082 [2024-07-20 15:43:37.730980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76548 ]
00:08:03.082 [2024-07-20 15:43:37.868404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:03.340 [2024-07-20 15:43:37.913049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:03.340 15:43:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:08:03.340 15:43:37 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:08:03.340 15:43:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:03.340 15:43:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:03.340 15:43:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:08:03.340 15:43:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:08:03.340 15:43:37 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:08:03.340 15:43:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:08:03.340 15:43:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:08:03.340 15:43:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:08:03.340 15:43:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:08:03.340 15:43:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:08:04.714 15:43:39 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:04.714 15:43:39 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:08:04.714 15:43:39 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:04.714 real 0m1.442s
00:08:04.714 user 0m0.021s
00:08:04.714 sys 0m0.005s
00:08:04.714 ************************************
00:08:04.714 END TEST accel_dif_verify
00:08:04.714 ************************************
00:08:04.714 15:43:39 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:08:04.714 ************************************
00:08:04.714 START TEST accel_dif_generate
00:08:04.714 ************************************
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
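accel_dif_generate is the producer side of the previous test: rather than checking existing tags, the opcode computes and writes the 8-byte DIF for each block, which is why its config readout below mirrors dif_verify's byte sizes exactly. A hypothetical back-to-back run of both DIF opcodes (each accel_perf invocation allocates its own buffers, so these are two independent measurements, not a pipeline):

    # run the generate and verify DIF workloads in sequence, one second each
    for w in dif_generate dif_verify; do
        /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w "$w"
    done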
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:08:04.714 [2024-07-20 15:43:39.244260] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:04.714 [2024-07-20 15:43:39.244516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76584 ]
00:08:04.714 [2024-07-20 15:43:39.392897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:04.714 [2024-07-20 15:43:39.435758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:08:04.714 15:43:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:08:04.715 15:43:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:08:06.088 15:43:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:06.088 15:43:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:08:06.088 15:43:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:06.088 real 0m1.454s
00:08:06.088 user 0m1.227s
00:08:06.088 sys 0m0.143s
00:08:06.088 ************************************
00:08:06.088 END TEST accel_dif_generate
00:08:06.088 ************************************
00:08:06.088 15:43:40 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:08:06.088 ************************************
00:08:06.088 START TEST accel_dif_generate_copy
00:08:06.089 ************************************
00:08:06.089 15:43:40 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:08:06.089 15:43:40 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
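dif_generate_copy fuses tag generation with a data copy in a single accel operation, mirroring a write path where payload and protection information land in a destination buffer together; notably, its config readout below shows two 4096-byte buffer values but none of the 512-byte/8-byte DIF geometry the pure dif tests reported, which is an observation about the trace rather than something the log states. Hand-run sketch under the same path assumption:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy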
00:08:06.089 15:43:40 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:08:06.089 15:43:40 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:08:06.089 [2024-07-20 15:43:40.763332] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:06.089 [2024-07-20 15:43:40.763492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76619 ]
00:08:06.373 [2024-07-20 15:43:40.926854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:06.373 [2024-07-20 15:43:40.969773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:06.373 15:43:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:08:06.373 15:43:41 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:08:06.373 15:43:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:06.373 15:43:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:06.373 15:43:41 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:08:06.373 15:43:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:08:06.373 15:43:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:08:06.373 15:43:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:08:06.373 15:43:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:08:06.373 15:43:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No
00:08:07.783 15:43:42 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:07.783 15:43:42 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:08:07.783 15:43:42 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:07.783 real 0m1.466s
00:08:07.783 user 0m1.225s
00:08:07.783 sys 0m0.155s
00:08:07.783 ************************************
00:08:07.783 END TEST accel_dif_generate_copy
00:08:07.783 ************************************
00:08:07.783 15:43:42 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:08:07.783 15:43:42 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:08:07.783 ************************************
00:08:07.783 START TEST accel_comp
00:08:07.784 ************************************
00:08:07.784 15:43:42 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:08:07.784 15:43:42 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
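Unlike the synthetic workloads above, the compress test needs real input, supplied with -l; the suite points it at test/accel/bib inside the checkout, as the invocation above shows. Hand-run sketch with the same path:

    # compress the sample input file for one second using the software module
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib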
00:08:07.784 15:43:42 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:08:07.784 15:43:42 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
00:08:07.784 [2024-07-20 15:43:42.292143] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:07.784 [2024-07-20 15:43:42.292421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76655 ]
00:08:07.784 [2024-07-20 15:43:42.443955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:07.784 [2024-07-20 15:43:42.489077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:07.784 15:43:42 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:08:07.784 15:43:42 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:08:07.784 15:43:42 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:07.784 15:43:42 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:08:07.784 15:43:42 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:08:07.784 15:43:42 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:08:07.784 15:43:42 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:08:07.784 15:43:42 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:08:07.784 15:43:42 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:08:07.784 15:43:42 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:08:09.161 15:43:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:09.161 15:43:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:08:09.161 15:43:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:09.161 real 0m1.460s
00:08:09.161 user 0m0.025s
00:08:09.161 sys 0m0.001s
00:08:09.161 ************************************
00:08:09.161 END TEST accel_comp
00:08:09.161 ************************************
00:08:09.161 15:43:43 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:08:09.161 ************************************
00:08:09.161 START TEST accel_decomp
00:08:09.161 ************************************
00:08:09.161 15:43:43 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:08:09.162 15:43:43 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
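accel_decomp drives the reverse path over the same input file: the -l data is presumably compressed up front so there is something to decompress, and -y (val=Yes in the trace below, where the compress run showed val=No) re-verifies each decompressed buffer. Sketch under the same path assumptions:

    # decompress the pre-compressed sample data for one second, verifying output
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y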
15:43:43 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:09.161 15:43:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.161 15:43:43 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:09.161 15:43:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.161 15:43:43 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:09.161 15:43:43 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:09.161 15:43:43 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.162 15:43:43 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.162 15:43:43 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.162 15:43:43 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.162 15:43:43 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.162 15:43:43 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:09.162 15:43:43 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:09.162 [2024-07-20 15:43:43.820034] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:09.162 [2024-07-20 15:43:43.820275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76685 ] 00:08:09.419 [2024-07-20 15:43:43.970438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.419 [2024-07-20 15:43:44.015117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.419 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.420 15:43:44 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:09.420 15:43:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:10.798 15:43:45 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.798 ************************************ 00:08:10.798 END TEST accel_decomp 00:08:10.798 ************************************ 00:08:10.798 00:08:10.798 real 0m1.459s 00:08:10.798 user 0m1.212s 00:08:10.798 sys 0m0.162s 00:08:10.798 15:43:45 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.798 15:43:45 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:10.798 15:43:45 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:10.798 15:43:45 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:10.798 15:43:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:10.798 15:43:45 accel -- common/autotest_common.sh@10 -- # set +x 
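For readers following the xtrace: every decompress test below drives the same accel.sh loop. accel_perf is launched with the JSON module config on fd 62 (-c /dev/fd/62), prints its settings as colon-separated lines, and the harness reads them back with "IFS=: read -r var val", recording accel_module and accel_opc for the closing "[[ -n ... ]]" checks. A minimal sketch of that loop follows, reconstructed from the xtrace rather than quoted from accel.sh; the sample input lines, the whitespace trim, and the *module*/*opcode* case patterns are illustrative assumptions.

  # Reconstruction of the "IFS=:" / "read -r var val" / case "$var" pattern
  # seen at accel/accel.sh@19-23 above (details assumed for illustration).
  while IFS=: read -r var val; do
    val=${val# }                       # trim the space after the colon
    case "$var" in
      *module*) accel_module=$val ;;   # "software" in these runs
      *opcode*) accel_opc=$val ;;      # "decompress" in these runs
    esac
  done < <(printf '%s\n' 'module: software' 'opcode: decompress')
  # Process substitution (not a pipe) keeps the variables in this shell,
  # so the final checks can mirror the accel.sh@27 assertions in the log:
  [[ -n $accel_module && -n $accel_opc ]]
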
00:08:10.798 ************************************ 00:08:10.798 START TEST accel_decmop_full 00:08:10.798 ************************************ 00:08:10.798 15:43:45 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:08:10.798 [2024-07-20 15:43:45.349894] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:10.798 [2024-07-20 15:43:45.350132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76726 ] 00:08:10.798 [2024-07-20 15:43:45.497911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.798 [2024-07-20 15:43:45.539333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:10.798 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.058 15:43:45 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.058 15:43:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.995 15:43:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:11.995 15:43:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.995 15:43:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.995 15:43:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.995 15:43:46 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:08:11.995 15:43:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.995 15:43:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.995 15:43:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.995 15:43:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:11.995 15:43:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.995 15:43:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.995 15:43:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.995 15:43:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:11.995 15:43:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.996 15:43:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.996 15:43:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.996 15:43:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:11.996 15:43:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.996 15:43:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.996 15:43:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.996 15:43:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:11.996 15:43:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.996 15:43:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.996 15:43:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.996 15:43:46 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.996 15:43:46 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:11.996 ************************************ 00:08:11.996 END TEST accel_decmop_full 00:08:11.996 ************************************ 00:08:11.996 15:43:46 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.996 00:08:11.996 real 0m1.459s 00:08:11.996 user 0m1.228s 00:08:11.996 sys 0m0.146s 00:08:11.996 15:43:46 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:11.996 15:43:46 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:08:12.255 15:43:46 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:12.255 15:43:46 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:12.255 15:43:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:12.255 15:43:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.255 ************************************ 00:08:12.255 START TEST accel_decomp_mcore 00:08:12.255 ************************************ 00:08:12.255 15:43:46 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:12.255 15:43:46 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:12.255 15:43:46 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:12.255 15:43:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.255 15:43:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.255 15:43:46 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
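The only change from the plain accel_decomp run is the -m 0xf core mask visible in the two command lines above: EAL receives it as "-c 0xf", reports "Total cores available: 4", and starts a reactor on each of cores 0-3 (their start-up notices can interleave out of order). A standalone equivalent of this invocation, using the workspace paths shown above; the real run additionally feeds the JSON accel config via -c /dev/fd/62.

  # Four-core software decompress of the "bib" corpus for one second,
  # with -y requesting result verification (same flags as the harness).
  bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w decompress -l "$bib" -y -m 0xf
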
00:08:12.255 15:43:46 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:12.255 15:43:46 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:12.255 15:43:46 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.255 15:43:46 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.255 15:43:46 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.255 15:43:46 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.255 15:43:46 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.255 15:43:46 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:12.255 15:43:46 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:12.255 [2024-07-20 15:43:46.873322] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:12.255 [2024-07-20 15:43:46.873591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76756 ] 00:08:12.255 [2024-07-20 15:43:47.024603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.515 [2024-07-20 15:43:47.073665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.515 [2024-07-20 15:43:47.073843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.515 [2024-07-20 15:43:47.073932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.515 [2024-07-20 15:43:47.074020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:12.515 15:43:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.893 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.894 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:08:13.894 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.894 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.894 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:13.894 15:43:48 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.894 00:08:13.894 real 0m1.480s 00:08:13.894 user 0m0.023s 00:08:13.894 sys 0m0.006s 00:08:13.894 15:43:48 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.894 15:43:48 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:13.894 ************************************ 00:08:13.894 END TEST accel_decomp_mcore 00:08:13.894 ************************************ 00:08:13.894 15:43:48 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.894 15:43:48 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:13.894 15:43:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.894 15:43:48 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.894 ************************************ 00:08:13.894 START TEST accel_decomp_full_mcore 00:08:13.894 ************************************ 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
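accel_decomp_full_mcore adds -o 0 to the same four-core invocation. The effect shows up in the settings parsed below: the transfer size reads '111250 bytes' — the whole decompressed bib file per operation — where the non-"full" runs read '4096 bytes', the default chunk. A sketch of the two variants, assuming accel_perf stands in for the full build/examples path.

  bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
  # Default chunking, parsed as '4096 bytes' in accel_decomp_mcore above:
  accel_perf -t 1 -w decompress -l "$bib" -y -m 0xf
  # -o 0 switches to full-size buffers, parsed as '111250 bytes' below:
  accel_perf -t 1 -w decompress -l "$bib" -y -m 0xf -o 0
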
00:08:13.894 [2024-07-20 15:43:48.414091] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:13.894 [2024-07-20 15:43:48.414228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76800 ] 00:08:13.894 [2024-07-20 15:43:48.566067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.894 [2024-07-20 15:43:48.611348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.894 [2024-07-20 15:43:48.611522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.894 [2024-07-20 15:43:48.611674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.894 [2024-07-20 15:43:48.611548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.894 15:43:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.269 15:43:49 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.269 00:08:15.269 real 0m1.485s 00:08:15.269 user 0m0.021s 00:08:15.269 sys 0m0.007s 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.269 ************************************ 00:08:15.269 END TEST accel_decomp_full_mcore 00:08:15.269 ************************************ 00:08:15.269 15:43:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:15.269 15:43:49 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:15.269 15:43:49 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:15.269 15:43:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.269 15:43:49 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.269 ************************************ 00:08:15.269 START TEST accel_decomp_mthread 00:08:15.269 ************************************ 00:08:15.269 15:43:49 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:15.269 15:43:49 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:15.269 15:43:49 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:15.270 15:43:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.270 15:43:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.270 15:43:49 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:15.270 15:43:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:15.270 15:43:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:15.270 15:43:49 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.270 15:43:49 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.270 15:43:49 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.270 15:43:49 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.270 15:43:49 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.270 15:43:49 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:15.270 15:43:49 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:15.270 [2024-07-20 15:43:49.974492] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
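accel_decomp_mthread returns to a single core but passes -T 2; correspondingly, the settings parsed below show val=2 where every earlier run showed val=1. The reading sketched here — two accel_perf worker threads on the one core in the default 0x1 mask — is inferred from that value, not quoted from accel_perf's usage text.

  bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
  # Earlier single-core runs, one worker thread (parsed as "val=1"):
  accel_perf -t 1 -w decompress -l "$bib" -y
  # This run: two worker threads on the same core (parsed as "val=2"):
  accel_perf -t 1 -w decompress -l "$bib" -y -T 2
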
00:08:15.270 [2024-07-20 15:43:49.974622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76833 ] 00:08:15.528 [2024-07-20 15:43:50.126089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.528 [2024-07-20 15:43:50.166588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.528 15:43:50 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.528 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.529 15:43:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.905 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.906 00:08:16.906 real 0m1.457s 00:08:16.906 user 0m1.231s 00:08:16.906 sys 0m0.139s 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.906 15:43:51 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:16.906 ************************************ 00:08:16.906 END TEST accel_decomp_mthread 00:08:16.906 ************************************ 00:08:16.906 15:43:51 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:16.906 15:43:51 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:16.906 15:43:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.906 15:43:51 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.906 ************************************ 00:08:16.906 START TEST accel_decomp_full_mthread 00:08:16.906 ************************************ 00:08:16.906 15:43:51 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:16.906 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:16.906 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:16.906 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:16.906 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:16.906 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:16.906 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:16.906 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:16.906 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.906 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.906 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.906 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.906 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.906 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:16.906 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:16.906 [2024-07-20 15:43:51.507059] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
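The run below is the accel_perf example binary exercising software decompress with two worker threads, its JSON config fed in over fd 62. A minimal standalone sketch of the invocation traced above; the empty '{}' config is an assumption standing in for whatever build_accel_config emitted:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
      -y -o 0 -T 2 62<<< '{}'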
00:08:16.906 [2024-07-20 15:43:51.507215] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76874 ] 00:08:16.906 [2024-07-20 15:43:51.657906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.906 [2024-07-20 15:43:51.699283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.165 15:43:51 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.165 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.166 15:43:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.543 00:08:18.543 real 0m1.489s 00:08:18.543 user 0m1.245s 00:08:18.543 sys 0m0.150s 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:18.543 15:43:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:18.543 ************************************ 00:08:18.543 END TEST accel_decomp_full_mthread 00:08:18.543 ************************************ 00:08:18.543 15:43:53 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:08:18.543 15:43:53 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:18.543 15:43:53 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:18.543 15:43:53 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:18.543 15:43:53 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.543 15:43:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.543 15:43:53 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.543 15:43:53 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.543 15:43:53 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.543 15:43:53 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.543 15:43:53 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.543 15:43:53 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:18.543 15:43:53 accel -- accel/accel.sh@41 -- # jq -r . 00:08:18.543 ************************************ 00:08:18.543 START TEST accel_dif_functional_tests 00:08:18.543 ************************************ 00:08:18.543 15:43:53 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:18.543 [2024-07-20 15:43:53.114136] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:18.543 [2024-07-20 15:43:53.114298] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76905 ] 00:08:18.543 [2024-07-20 15:43:53.267577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:18.543 [2024-07-20 15:43:53.309223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.543 [2024-07-20 15:43:53.309311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.543 [2024-07-20 15:43:53.309472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.803 00:08:18.803 00:08:18.803 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.803 http://cunit.sourceforge.net/ 00:08:18.803 00:08:18.803 00:08:18.803 Suite: accel_dif 00:08:18.803 Test: verify: DIF generated, GUARD check ...passed 00:08:18.803 Test: verify: DIF generated, APPTAG check ...passed 00:08:18.803 Test: verify: DIF generated, REFTAG check ...passed 00:08:18.803 Test: verify: DIF not generated, GUARD check ...passed 00:08:18.803 Test: verify: DIF not generated, APPTAG check ...[2024-07-20 15:43:53.379128] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:18.803 [2024-07-20 15:43:53.379209] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:18.803 passed 00:08:18.803 Test: verify: DIF not generated, REFTAG check ...[2024-07-20 15:43:53.379282] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:18.803 passed 00:08:18.803 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:18.803 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:08:18.803 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:18.803 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:18.803 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:18.803 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:08:18.803 Test: verify 
copy: DIF generated, GUARD check ...[2024-07-20 15:43:53.379632] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:18.803 [2024-07-20 15:43:53.379850] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:18.803 passed 00:08:18.803 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:18.803 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:18.803 Test: verify copy: DIF not generated, GUARD check ...passed 00:08:18.803 Test: verify copy: DIF not generated, APPTAG check ...passed 00:08:18.803 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-20 15:43:53.380117] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:18.803 [2024-07-20 15:43:53.380193] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:18.803 [2024-07-20 15:43:53.380272] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:18.803 passed 00:08:18.803 Test: generate copy: DIF generated, GUARD check ...passed 00:08:18.803 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:18.803 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:18.803 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:18.803 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:18.803 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:18.803 Test: generate copy: iovecs-len validate ...[2024-07-20 15:43:53.380673] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:18.803 passed 00:08:18.803 Test: generate copy: buffer alignment validate ...passed 00:08:18.803 00:08:18.803 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.803 suites 1 1 n/a 0 0 00:08:18.803 tests 26 26 26 0 0 00:08:18.803 asserts 115 115 115 0 n/a 00:08:18.803 00:08:18.803 Elapsed time = 0.005 seconds 00:08:18.803 00:08:18.803 real 0m0.575s 00:08:18.803 user 0m0.639s 00:08:18.803 sys 0m0.214s 00:08:18.803 15:43:53 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:19.063 15:43:53 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:19.063 ************************************ 00:08:19.063 END TEST accel_dif_functional_tests 00:08:19.063 ************************************ 00:08:19.063 00:08:19.063 real 0m34.123s 00:08:19.063 user 0m34.359s 00:08:19.063 sys 0m5.323s 00:08:19.063 15:43:53 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:19.063 15:43:53 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.063 ************************************ 00:08:19.063 END TEST accel 00:08:19.063 ************************************ 00:08:19.063 15:43:53 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:19.063 15:43:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:19.063 15:43:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:19.063 15:43:53 -- common/autotest_common.sh@10 -- # set +x 00:08:19.063 ************************************ 00:08:19.063 START TEST accel_rpc 00:08:19.063 ************************************ 00:08:19.063 15:43:53 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:19.063 * Looking for test storage... 
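The accel_rpc suite starting here boots spdk_tgt with --wait-for-rpc and, before calling framework_start_init, pins the copy opcode to a module over RPC. A minimal sketch of the same flow driven with rpc.py directly (rpc_cmd in the trace is the harness wrapper around it):

  # Target must still be in its pre-init state (--wait-for-rpc).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  # Confirm the assignment stuck; this prints "software".
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy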
00:08:19.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:19.322 15:43:53 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:19.322 15:43:53 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=76976 00:08:19.322 15:43:53 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:19.322 15:43:53 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 76976 00:08:19.322 15:43:53 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 76976 ']' 00:08:19.322 15:43:53 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.322 15:43:53 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:19.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.322 15:43:53 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.322 15:43:53 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:19.322 15:43:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.322 [2024-07-20 15:43:53.962364] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:19.322 [2024-07-20 15:43:53.962493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76976 ] 00:08:19.322 [2024-07-20 15:43:54.113042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.581 [2024-07-20 15:43:54.153515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.148 15:43:54 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:20.148 15:43:54 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:20.148 15:43:54 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:20.148 15:43:54 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:20.148 15:43:54 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:20.148 15:43:54 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:20.148 15:43:54 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:20.148 15:43:54 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:20.148 15:43:54 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:20.148 15:43:54 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.148 ************************************ 00:08:20.148 START TEST accel_assign_opcode 00:08:20.148 ************************************ 00:08:20.148 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:08:20.148 15:43:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:20.148 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.148 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:20.148 [2024-07-20 15:43:54.753292] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:20.148 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.148 15:43:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- 
# rpc_cmd accel_assign_opc -o copy -m software 00:08:20.148 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.148 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:20.148 [2024-07-20 15:43:54.765265] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:20.148 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.148 15:43:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:20.148 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.148 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:20.406 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.406 15:43:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:20.406 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.406 15:43:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:20.406 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:20.406 15:43:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:20.406 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.406 software 00:08:20.406 00:08:20.406 real 0m0.253s 00:08:20.406 user 0m0.048s 00:08:20.406 sys 0m0.017s 00:08:20.406 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:20.406 15:43:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:20.406 ************************************ 00:08:20.406 END TEST accel_assign_opcode 00:08:20.406 ************************************ 00:08:20.406 15:43:55 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 76976 00:08:20.406 15:43:55 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 76976 ']' 00:08:20.406 15:43:55 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 76976 00:08:20.406 15:43:55 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:08:20.406 15:43:55 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:20.406 15:43:55 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76976 00:08:20.406 15:43:55 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:20.406 killing process with pid 76976 00:08:20.406 15:43:55 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:20.406 15:43:55 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76976' 00:08:20.406 15:43:55 accel_rpc -- common/autotest_common.sh@965 -- # kill 76976 00:08:20.406 15:43:55 accel_rpc -- common/autotest_common.sh@970 -- # wait 76976 00:08:20.975 00:08:20.975 real 0m1.736s 00:08:20.975 user 0m1.595s 00:08:20.975 sys 0m0.563s 00:08:20.975 15:43:55 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:20.975 15:43:55 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.975 ************************************ 00:08:20.975 END TEST accel_rpc 00:08:20.975 ************************************ 00:08:20.975 15:43:55 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:20.975 15:43:55 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:20.975 15:43:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:20.975 15:43:55 -- common/autotest_common.sh@10 -- # set +x 00:08:20.975 ************************************ 00:08:20.975 START TEST app_cmdline 00:08:20.975 ************************************ 00:08:20.975 15:43:55 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:20.975 * Looking for test storage... 00:08:20.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:20.975 15:43:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:20.975 15:43:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=77070 00:08:20.975 15:43:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 77070 00:08:20.975 15:43:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:20.975 15:43:55 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 77070 ']' 00:08:20.975 15:43:55 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.975 15:43:55 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:20.975 15:43:55 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.975 15:43:55 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:20.975 15:43:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:20.975 [2024-07-20 15:43:55.769395] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
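With the target restricted via --rpcs-allowed to spdk_get_version and rpc_get_methods, the cmdline suite checks that exactly those two methods answer and everything else is rejected. Equivalent manual queries, matching the responses captured further down:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  # -> {"version": "SPDK v24.05.1-pre git sha1 5fa2f5086", "fields": {...}}
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
  # Any other method, e.g. env_dpdk_get_mem_stats, fails with -32601 "Method not found".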
00:08:20.975 [2024-07-20 15:43:55.769711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77070 ] 00:08:21.235 [2024-07-20 15:43:55.921663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.235 [2024-07-20 15:43:55.962769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.802 15:43:56 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:21.802 15:43:56 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:08:21.802 15:43:56 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:22.060 { 00:08:22.060 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:08:22.060 "fields": { 00:08:22.060 "major": 24, 00:08:22.060 "minor": 5, 00:08:22.060 "patch": 1, 00:08:22.060 "suffix": "-pre", 00:08:22.060 "commit": "5fa2f5086" 00:08:22.060 } 00:08:22.060 } 00:08:22.060 15:43:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:22.060 15:43:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:22.060 15:43:56 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:22.060 15:43:56 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:22.060 15:43:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:22.060 15:43:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:22.060 15:43:56 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.060 15:43:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:22.060 15:43:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:22.060 15:43:56 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.060 15:43:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:22.060 15:43:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:22.060 15:43:56 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:22.060 15:43:56 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:22.060 15:43:56 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:22.060 15:43:56 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.060 15:43:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.060 15:43:56 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.060 15:43:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.060 15:43:56 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.060 15:43:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.060 15:43:56 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.060 15:43:56 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:22.060 15:43:56 app_cmdline -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:22.320 request: 00:08:22.320 { 00:08:22.320 "method": "env_dpdk_get_mem_stats", 00:08:22.320 "req_id": 1 00:08:22.320 } 00:08:22.320 Got JSON-RPC error response 00:08:22.320 response: 00:08:22.320 { 00:08:22.320 "code": -32601, 00:08:22.320 "message": "Method not found" 00:08:22.320 } 00:08:22.320 15:43:56 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:22.320 15:43:56 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:22.320 15:43:56 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:22.320 15:43:56 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:22.320 15:43:56 app_cmdline -- app/cmdline.sh@1 -- # killprocess 77070 00:08:22.320 15:43:56 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 77070 ']' 00:08:22.320 15:43:56 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 77070 00:08:22.320 15:43:56 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:08:22.320 15:43:56 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:22.320 15:43:56 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77070 00:08:22.320 killing process with pid 77070 00:08:22.320 15:43:56 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:22.320 15:43:56 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:22.320 15:43:56 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77070' 00:08:22.320 15:43:56 app_cmdline -- common/autotest_common.sh@965 -- # kill 77070 00:08:22.320 15:43:56 app_cmdline -- common/autotest_common.sh@970 -- # wait 77070 00:08:22.579 00:08:22.579 real 0m1.796s 00:08:22.579 user 0m1.893s 00:08:22.579 sys 0m0.573s 00:08:22.579 15:43:57 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.579 15:43:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:22.579 ************************************ 00:08:22.579 END TEST app_cmdline 00:08:22.579 ************************************ 00:08:22.838 15:43:57 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:22.838 15:43:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:22.838 15:43:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.838 15:43:57 -- common/autotest_common.sh@10 -- # set +x 00:08:22.838 ************************************ 00:08:22.838 START TEST version 00:08:22.838 ************************************ 00:08:22.838 15:43:57 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:22.838 * Looking for test storage... 
00:08:22.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:22.838 15:43:57 version -- app/version.sh@17 -- # get_header_version major 00:08:22.838 15:43:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:22.838 15:43:57 version -- app/version.sh@14 -- # cut -f2 00:08:22.838 15:43:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:22.838 15:43:57 version -- app/version.sh@17 -- # major=24 00:08:22.838 15:43:57 version -- app/version.sh@18 -- # get_header_version minor 00:08:22.838 15:43:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:22.838 15:43:57 version -- app/version.sh@14 -- # cut -f2 00:08:22.838 15:43:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:22.838 15:43:57 version -- app/version.sh@18 -- # minor=5 00:08:22.838 15:43:57 version -- app/version.sh@19 -- # get_header_version patch 00:08:22.838 15:43:57 version -- app/version.sh@14 -- # cut -f2 00:08:22.838 15:43:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:22.838 15:43:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:22.838 15:43:57 version -- app/version.sh@19 -- # patch=1 00:08:22.838 15:43:57 version -- app/version.sh@20 -- # get_header_version suffix 00:08:22.838 15:43:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:22.838 15:43:57 version -- app/version.sh@14 -- # cut -f2 00:08:22.838 15:43:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:22.838 15:43:57 version -- app/version.sh@20 -- # suffix=-pre 00:08:22.838 15:43:57 version -- app/version.sh@22 -- # version=24.5 00:08:22.838 15:43:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:22.838 15:43:57 version -- app/version.sh@25 -- # version=24.5.1 00:08:22.838 15:43:57 version -- app/version.sh@28 -- # version=24.5.1rc0 00:08:22.838 15:43:57 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:22.838 15:43:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:22.838 15:43:57 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:08:22.838 15:43:57 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:08:22.838 00:08:22.838 real 0m0.220s 00:08:22.838 user 0m0.118s 00:08:22.838 sys 0m0.152s 00:08:22.839 ************************************ 00:08:22.839 END TEST version 00:08:22.839 ************************************ 00:08:22.839 15:43:57 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.839 15:43:57 version -- common/autotest_common.sh@10 -- # set +x 00:08:23.169 15:43:57 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:23.169 15:43:57 -- spdk/autotest.sh@198 -- # uname -s 00:08:23.169 15:43:57 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:23.169 15:43:57 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:23.169 15:43:57 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:23.169 15:43:57 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:08:23.169 15:43:57 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 
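The version suite that just passed derives each component by grepping the #define lines out of include/spdk/version.h; the header separates macro name and value with a tab, so the plain (tab-delimited) cut -f2 recovers the value. A condensed sketch of the pipeline traced above, folded into version.sh's get_header_version helper:

  get_header_version() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
          /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)    # 24
  minor=$(get_header_version MINOR)    # 5
  patch=$(get_header_version PATCH)    # 1
  suffix=$(get_header_version SUFFIX)  # -pre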
00:08:23.169 15:43:57 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:23.169 15:43:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:23.169 15:43:57 -- common/autotest_common.sh@10 -- # set +x 00:08:23.169 ************************************ 00:08:23.169 START TEST blockdev_nvme 00:08:23.169 ************************************ 00:08:23.169 15:43:57 blockdev_nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:23.169 * Looking for test storage... 00:08:23.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:23.169 15:43:57 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=77215 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:23.169 15:43:57 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 77215 00:08:23.169 15:43:57 blockdev_nvme -- common/autotest_common.sh@827 -- # '[' -z 77215 ']' 00:08:23.169 15:43:57 blockdev_nvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.169 15:43:57 blockdev_nvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:23.169 15:43:57 blockdev_nvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.169 15:43:57 blockdev_nvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:23.169 15:43:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.169 [2024-07-20 15:43:57.934967] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:23.169 [2024-07-20 15:43:57.935302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77215 ] 00:08:23.511 [2024-07-20 15:43:58.083838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.511 [2024-07-20 15:43:58.124861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.077 15:43:58 blockdev_nvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:24.077 15:43:58 blockdev_nvme -- common/autotest_common.sh@860 -- # return 0 00:08:24.077 15:43:58 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:24.077 15:43:58 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:08:24.077 15:43:58 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:24.077 15:43:58 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:24.077 15:43:58 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:24.077 15:43:58 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:24.077 15:43:58 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.077 15:43:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.348 15:43:59 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.348 15:43:59 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:08:24.348 15:43:59 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.348 15:43:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.348 15:43:59 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.348 15:43:59 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:08:24.348 15:43:59 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:08:24.348 15:43:59 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.348 15:43:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.348 15:43:59 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.348 15:43:59 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:08:24.348 15:43:59 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.348 15:43:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
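For readability, the subsystem config that gen_nvme.sh produced and load_subsystem_config consumed above expands to the following (same content, reflowed):

  {
    "subsystem": "bdev",
    "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
    ]
  }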
00:08:24.609 15:43:59 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.609 15:43:59 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:08:24.609 15:43:59 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:08:24.609 15:43:59 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.609 15:43:59 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:08:24.609 15:43:59 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:08:24.609 15:43:59 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "29e32311-3eb9-4579-ac5f-9b9a1226494b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "29e32311-3eb9-4579-ac5f-9b9a1226494b",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "af458b76-2f96-4365-9120-740de21efca2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "af458b76-2f96-4365-9120-740de21efca2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' 
"firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2161c355-352d-465d-b0cf-8aa1dd132119"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2161c355-352d-465d-b0cf-8aa1dd132119",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "962de112-e7d2-4a22-b347-33e3f6bac252"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "962de112-e7d2-4a22-b347-33e3f6bac252",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "bd0586b2-e367-4808-8f3a-cf3105346aa2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bd0586b2-e367-4808-8f3a-cf3105346aa2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' 
"pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f8c9c1f3-0caf-406d-83e2-63b269cc2ac1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f8c9c1f3-0caf-406d-83e2-63b269cc2ac1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:24.609 15:43:59 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:08:24.609 15:43:59 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:08:24.609 15:43:59 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:08:24.609 15:43:59 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 77215 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@946 -- # '[' -z 77215 ']' 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@950 -- # kill -0 77215 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@951 -- # uname 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77215 00:08:24.609 killing process with pid 77215 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77215' 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@965 -- # kill 77215 00:08:24.609 15:43:59 blockdev_nvme -- common/autotest_common.sh@970 -- # wait 77215 00:08:25.175 15:43:59 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:25.175 15:43:59 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1 '' 00:08:25.175 15:43:59 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:25.175 15:43:59 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.175 15:43:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.175 ************************************ 00:08:25.175 START TEST bdev_hello_world 00:08:25.175 ************************************ 00:08:25.175 15:43:59 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:25.175 [2024-07-20 15:43:59.839858] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:25.175 [2024-07-20 15:43:59.839981] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77283 ] 00:08:25.433 [2024-07-20 15:43:59.988561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.433 [2024-07-20 15:44:00.032952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.692 [2024-07-20 15:44:00.403401] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:25.692 [2024-07-20 15:44:00.403464] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:25.692 [2024-07-20 15:44:00.403504] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:25.692 [2024-07-20 15:44:00.405716] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:25.692 [2024-07-20 15:44:00.406046] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:25.692 [2024-07-20 15:44:00.406070] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:25.692 [2024-07-20 15:44:00.406308] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
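The NOTICE sequence above is the hello_bdev example opening Nvme0n1, writing "Hello World!" through the bdev layer and reading it back. Its invocation, reconstructed from the trace (the trailing empty argument comes from the run_test wrapper and is omitted here):

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1
  # Expected tail of the output:
  # *NOTICE*: Read string from bdev : Hello World!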
00:08:25.692 00:08:25.692 [2024-07-20 15:44:00.406344] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:25.950 00:08:25.950 real 0m0.875s 00:08:25.950 user 0m0.560s 00:08:25.950 sys 0m0.213s 00:08:25.950 15:44:00 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.950 ************************************ 00:08:25.950 END TEST bdev_hello_world 00:08:25.950 ************************************ 00:08:25.950 15:44:00 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:25.950 15:44:00 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:08:25.950 15:44:00 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:25.950 15:44:00 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.950 15:44:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.950 ************************************ 00:08:25.950 START TEST bdev_bounds 00:08:25.950 ************************************ 00:08:25.950 15:44:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:08:25.950 15:44:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=77314 00:08:25.950 15:44:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:25.950 15:44:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:25.950 Process bdevio pid: 77314 00:08:25.950 15:44:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 77314' 00:08:25.950 15:44:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 77314 00:08:25.950 15:44:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 77314 ']' 00:08:25.950 15:44:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.950 15:44:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:25.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.950 15:44:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.951 15:44:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:25.951 15:44:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:26.209 [2024-07-20 15:44:00.789514] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:26.209 [2024-07-20 15:44:00.789663] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77314 ] 00:08:26.209 [2024-07-20 15:44:00.941012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:26.209 [2024-07-20 15:44:00.984556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.209 [2024-07-20 15:44:00.984638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.209 [2024-07-20 15:44:00.984752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.147 15:44:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:27.147 15:44:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:08:27.147 15:44:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:27.147 I/O targets: 00:08:27.147 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:27.147 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:27.147 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:27.147 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:27.147 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:27.147 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:27.147 00:08:27.147 00:08:27.148 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.148 http://cunit.sourceforge.net/ 00:08:27.148 00:08:27.148 00:08:27.148 Suite: bdevio tests on: Nvme3n1 00:08:27.148 Test: blockdev write read block ...passed 00:08:27.148 Test: blockdev write zeroes read block ...passed 00:08:27.148 Test: blockdev write zeroes read no split ...passed 00:08:27.148 Test: blockdev write zeroes read split ...passed 00:08:27.148 Test: blockdev write zeroes read split partial ...passed 00:08:27.148 Test: blockdev reset ...[2024-07-20 15:44:01.683934] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:27.148 passed 00:08:27.148 Test: blockdev write read 8 blocks ...[2024-07-20 15:44:01.686023] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
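The bounds test does not let bdevio run straight through: it starts the app in wait mode and then triggers the suites over RPC. A sketch of the two-step invocation shown in the trace (flags copied from the run; -w makes bdevio wait for the RPC trigger, and the trailing '' is the empty extra-arguments slot the harness passes through):

# Start bdevio waiting on its default RPC socket, then kick off the suites.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
# ...once the app is listening on /var/tmp/spdk.sock:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests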
00:08:27.148 passed 00:08:27.148 Test: blockdev write read size > 128k ...passed 00:08:27.148 Test: blockdev write read invalid size ...passed 00:08:27.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:27.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:27.148 Test: blockdev write read max offset ...passed 00:08:27.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:27.148 Test: blockdev writev readv 8 blocks ...passed 00:08:27.148 Test: blockdev writev readv 30 x 1block ...passed 00:08:27.148 Test: blockdev writev readv block ...passed 00:08:27.148 Test: blockdev writev readv size > 128k ...passed 00:08:27.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:27.148 Test: blockdev comparev and writev ...[2024-07-20 15:44:01.692797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a960e000 len:0x1000 00:08:27.148 [2024-07-20 15:44:01.692852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:27.148 passed 00:08:27.148 Test: blockdev nvme passthru rw ...passed 00:08:27.148 Test: blockdev nvme passthru vendor specific ...passed 00:08:27.148 Test: blockdev nvme admin passthru ...[2024-07-20 15:44:01.693785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:27.148 [2024-07-20 15:44:01.693830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:27.148 passed 00:08:27.148 Test: blockdev copy ...passed 00:08:27.148 Suite: bdevio tests on: Nvme2n3 00:08:27.148 Test: blockdev write read block ...passed 00:08:27.148 Test: blockdev write zeroes read block ...passed 00:08:27.148 Test: blockdev write zeroes read no split ...passed 00:08:27.148 Test: blockdev write zeroes read split ...passed 00:08:27.148 Test: blockdev write zeroes read split partial ...passed 00:08:27.148 Test: blockdev reset ...[2024-07-20 15:44:01.707505] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:27.148 passed 00:08:27.148 Test: blockdev write read 8 blocks ...[2024-07-20 15:44:01.709624] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
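The COMPARE FAILURE (02/85) completions logged during each "comparev and writev" test are the expected outcome, not an error: the suite still reports passed, because the fused compare leg is meant to miscompare. Decoding the (SCT/SC) pair that spdk_nvme_print_completion prints, SCT 0x2 is Media and Data Integrity Errors and SC 0x85 is Compare Failure in the NVMe base spec. A hypothetical helper just to spell the fields out:

# Split the "(02/85)" status printed above into its SCT/SC fields.
decode_status() { IFS=/ read -r sct sc <<< "${1//[()]/}"; echo "SCT=0x$sct SC=0x$sc"; }
decode_status "(02/85)"   # -> SCT=0x02 SC=0x85 (media errors / compare failure)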
00:08:27.148 passed 00:08:27.148 Test: blockdev write read size > 128k ...passed 00:08:27.148 Test: blockdev write read invalid size ...passed 00:08:27.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:27.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:27.148 Test: blockdev write read max offset ...passed 00:08:27.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:27.148 Test: blockdev writev readv 8 blocks ...passed 00:08:27.148 Test: blockdev writev readv 30 x 1block ...passed 00:08:27.148 Test: blockdev writev readv block ...passed 00:08:27.148 Test: blockdev writev readv size > 128k ...passed 00:08:27.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:27.148 Test: blockdev comparev and writev ...[2024-07-20 15:44:01.716197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a960a000 len:0x1000 00:08:27.148 [2024-07-20 15:44:01.716242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:27.148 passed 00:08:27.148 Test: blockdev nvme passthru rw ...passed 00:08:27.148 Test: blockdev nvme passthru vendor specific ...passed 00:08:27.148 Test: blockdev nvme admin passthru ...[2024-07-20 15:44:01.717154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:27.148 [2024-07-20 15:44:01.717193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:27.148 passed 00:08:27.148 Test: blockdev copy ...passed 00:08:27.148 Suite: bdevio tests on: Nvme2n2 00:08:27.148 Test: blockdev write read block ...passed 00:08:27.148 Test: blockdev write zeroes read block ...passed 00:08:27.148 Test: blockdev write zeroes read no split ...passed 00:08:27.148 Test: blockdev write zeroes read split ...passed 00:08:27.148 Test: blockdev write zeroes read split partial ...passed 00:08:27.148 Test: blockdev reset ...[2024-07-20 15:44:01.733481] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:27.148 passed 00:08:27.148 Test: blockdev write read 8 blocks ...[2024-07-20 15:44:01.735547] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:27.148 passed 00:08:27.148 Test: blockdev write read size > 128k ...passed 00:08:27.148 Test: blockdev write read invalid size ...passed 00:08:27.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:27.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:27.148 Test: blockdev write read max offset ...passed 00:08:27.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:27.148 Test: blockdev writev readv 8 blocks ...passed 00:08:27.148 Test: blockdev writev readv 30 x 1block ...passed 00:08:27.148 Test: blockdev writev readv block ...passed 00:08:27.148 Test: blockdev writev readv size > 128k ...passed 00:08:27.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:27.148 Test: blockdev comparev and writev ...[2024-07-20 15:44:01.742687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a9606000 len:0x1000 00:08:27.148 [2024-07-20 15:44:01.742732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:27.148 passed 00:08:27.148 Test: blockdev nvme passthru rw ...passed 00:08:27.148 Test: blockdev nvme passthru vendor specific ...passed 00:08:27.148 Test: blockdev nvme admin passthru ...[2024-07-20 15:44:01.743550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:27.148 [2024-07-20 15:44:01.743587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:27.148 passed 00:08:27.148 Test: blockdev copy ...passed 00:08:27.148 Suite: bdevio tests on: Nvme2n1 00:08:27.148 Test: blockdev write read block ...passed 00:08:27.148 Test: blockdev write zeroes read block ...passed 00:08:27.148 Test: blockdev write zeroes read no split ...passed 00:08:27.148 Test: blockdev write zeroes read split ...passed 00:08:27.148 Test: blockdev write zeroes read split partial ...passed 00:08:27.148 Test: blockdev reset ...[2024-07-20 15:44:01.760032] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:27.148 passed 00:08:27.148 Test: blockdev write read 8 blocks ...[2024-07-20 15:44:01.762075] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:27.148 passed 00:08:27.148 Test: blockdev write read size > 128k ...passed 00:08:27.148 Test: blockdev write read invalid size ...passed 00:08:27.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:27.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:27.148 Test: blockdev write read max offset ...passed 00:08:27.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:27.148 Test: blockdev writev readv 8 blocks ...passed 00:08:27.148 Test: blockdev writev readv 30 x 1block ...passed 00:08:27.148 Test: blockdev writev readv block ...passed 00:08:27.148 Test: blockdev writev readv size > 128k ...passed 00:08:27.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:27.148 Test: blockdev comparev and writev ...[2024-07-20 15:44:01.768974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a9602000 len:0x1000 00:08:27.148 [2024-07-20 15:44:01.769022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:27.148 passed 00:08:27.148 Test: blockdev nvme passthru rw ...passed 00:08:27.148 Test: blockdev nvme passthru vendor specific ...[2024-07-20 15:44:01.769959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:27.148 [2024-07-20 15:44:01.769993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:27.148 passed 00:08:27.148 Test: blockdev nvme admin passthru ...passed 00:08:27.148 Test: blockdev copy ...passed 00:08:27.148 Suite: bdevio tests on: Nvme1n1 00:08:27.148 Test: blockdev write read block ...passed 00:08:27.148 Test: blockdev write zeroes read block ...passed 00:08:27.148 Test: blockdev write zeroes read no split ...passed 00:08:27.148 Test: blockdev write zeroes read split ...passed 00:08:27.148 Test: blockdev write zeroes read split partial ...passed 00:08:27.148 Test: blockdev reset ...[2024-07-20 15:44:01.786151] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:27.148 [2024-07-20 15:44:01.788052] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
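Each suite's reset test detaches and re-attaches the controller behind the bdev (the nvme_ctrlr_disconnect / "Resetting controller successful" pairs above). Outside the harness the same path can be poked through the bdev_nvme RPC; a sketch, assuming the controller name registered with the target (Nvme2 here) and that this SPDK build ships the RPC:

# Ask the running target to reset one attached NVMe controller.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme2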
00:08:27.148 passed 00:08:27.148 Test: blockdev write read 8 blocks ...passed 00:08:27.148 Test: blockdev write read size > 128k ...passed 00:08:27.148 Test: blockdev write read invalid size ...passed 00:08:27.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:27.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:27.148 Test: blockdev write read max offset ...passed 00:08:27.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:27.148 Test: blockdev writev readv 8 blocks ...passed 00:08:27.148 Test: blockdev writev readv 30 x 1block ...passed 00:08:27.148 Test: blockdev writev readv block ...passed 00:08:27.148 Test: blockdev writev readv size > 128k ...passed 00:08:27.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:27.148 Test: blockdev comparev and writev ...[2024-07-20 15:44:01.795936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b540e000 len:0x1000 00:08:27.148 [2024-07-20 15:44:01.795981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:27.148 passed 00:08:27.148 Test: blockdev nvme passthru rw ...passed 00:08:27.148 Test: blockdev nvme passthru vendor specific ...[2024-07-20 15:44:01.796855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:27.148 [2024-07-20 15:44:01.796882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:27.148 passed 00:08:27.148 Test: blockdev nvme admin passthru ...passed 00:08:27.148 Test: blockdev copy ...passed 00:08:27.149 Suite: bdevio tests on: Nvme0n1 00:08:27.149 Test: blockdev write read block ...passed 00:08:27.149 Test: blockdev write zeroes read block ...passed 00:08:27.149 Test: blockdev write zeroes read no split ...passed 00:08:27.149 Test: blockdev write zeroes read split ...passed 00:08:27.149 Test: blockdev write zeroes read split partial ...passed 00:08:27.149 Test: blockdev reset ...[2024-07-20 15:44:01.826043] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:27.149 [2024-07-20 15:44:01.828019] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:27.149 passed 00:08:27.149 Test: blockdev write read 8 blocks ...passed 00:08:27.149 Test: blockdev write read size > 128k ...passed 00:08:27.149 Test: blockdev write read invalid size ...passed 00:08:27.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:27.149 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:27.149 Test: blockdev write read max offset ...passed 00:08:27.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:27.149 Test: blockdev writev readv 8 blocks ...passed 00:08:27.149 Test: blockdev writev readv 30 x 1block ...passed 00:08:27.149 Test: blockdev writev readv block ...passed 00:08:27.149 Test: blockdev writev readv size > 128k ...passed 00:08:27.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:27.149 Test: blockdev comparev and writev ...passed 00:08:27.149 Test: blockdev nvme passthru rw ...[2024-07-20 15:44:01.835261] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:27.149 separate metadata which is not supported yet. 00:08:27.149 passed 00:08:27.149 Test: blockdev nvme passthru vendor specific ...[2024-07-20 15:44:01.835967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:27.149 [2024-07-20 15:44:01.836008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:27.149 passed 00:08:27.149 Test: blockdev nvme admin passthru ...passed 00:08:27.149 Test: blockdev copy ...passed 00:08:27.149 00:08:27.149 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.149 suites 6 6 n/a 0 0 00:08:27.149 tests 138 138 138 0 0 00:08:27.149 asserts 893 893 893 0 n/a 00:08:27.149 00:08:27.149 Elapsed time = 0.389 seconds 00:08:27.149 0 00:08:27.149 15:44:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 77314 00:08:27.149 15:44:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 77314 ']' 00:08:27.149 15:44:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 77314 00:08:27.149 15:44:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:08:27.149 15:44:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:27.149 15:44:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77314 00:08:27.149 15:44:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:27.149 15:44:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:27.149 15:44:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77314' 00:08:27.149 killing process with pid 77314 00:08:27.149 15:44:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 77314 00:08:27.149 15:44:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 77314 00:08:27.408 15:44:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:08:27.408 00:08:27.408 real 0m1.382s 00:08:27.408 user 0m3.295s 00:08:27.408 sys 0m0.353s 00:08:27.408 15:44:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.408 15:44:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:27.408 ************************************ 00:08:27.408 END 
TEST bdev_bounds 00:08:27.408 ************************************ 00:08:27.408 15:44:02 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:27.408 15:44:02 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:27.408 15:44:02 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.408 15:44:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.408 ************************************ 00:08:27.408 START TEST bdev_nbd 00:08:27.408 ************************************ 00:08:27.408 15:44:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=77362 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 77362 /var/tmp/spdk-nbd.sock 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 77362 ']' 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:27.409 
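The nbd test only proceeds because the [[ -e /sys/module/nbd ]] gate above holds on this host. On a machine where it does not, the kernel module has to be loaded first; a sketch (assumes root and a kernel built with nbd support):

# Make the kernel's network block device driver available, then re-check
# the sysfs path the test gates on.
sudo modprobe nbd
test -e /sys/module/nbd && echo 'nbd module present'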
15:44:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:27.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:27.409 15:44:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:27.668 [2024-07-20 15:44:02.263003] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:27.668 [2024-07-20 15:44:02.263299] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.668 [2024-07-20 15:44:02.413434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.668 [2024-07-20 15:44:02.454201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.604 1+0 records in 00:08:28.604 1+0 records out 00:08:28.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643067 s, 6.4 MB/s 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:28.604 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.862 1+0 records in 00:08:28.862 1+0 records out 00:08:28.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643549 s, 6.4 MB/s 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:28.862 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.120 1+0 records in 00:08:29.120 1+0 records out 00:08:29.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658713 s, 6.2 MB/s 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:29.120 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:29.379 
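The waitfornbd helper traced repeatedly above is just a bounded poll of /proc/partitions; a minimal reimplementation of the visible logic (the sleep interval is an assumption, since the trace only shows the counter, the grep, and the break):

# Poll until the named nbd device appears in /proc/partitions,
# giving up after 20 attempts as the traced loop does.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && return 0
        sleep 0.1   # assumed pause; not visible in the trace
    done
    return 1
}
waitfornbd nbd0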
15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.379 1+0 records in 00:08:29.379 1+0 records out 00:08:29.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529737 s, 7.7 MB/s 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:29.379 15:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:29.379 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:29.379 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:29.379 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:29.379 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:08:29.379 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:29.379 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:29.379 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:29.379 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:08:29.379 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:29.379 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:29.379 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:29.379 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.379 1+0 records in 00:08:29.379 1+0 records out 00:08:29.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679133 s, 6.0 MB/s 00:08:29.379 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme3n1 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.638 1+0 records in 00:08:29.638 1+0 records out 00:08:29.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734682 s, 5.6 MB/s 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:29.638 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:29.897 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:29.897 { 00:08:29.897 "nbd_device": "/dev/nbd0", 00:08:29.897 "bdev_name": "Nvme0n1" 00:08:29.897 }, 00:08:29.897 { 00:08:29.897 "nbd_device": "/dev/nbd1", 00:08:29.897 "bdev_name": "Nvme1n1" 00:08:29.897 }, 00:08:29.897 { 00:08:29.897 "nbd_device": "/dev/nbd2", 00:08:29.897 "bdev_name": "Nvme2n1" 00:08:29.897 }, 00:08:29.897 { 00:08:29.897 "nbd_device": "/dev/nbd3", 00:08:29.897 "bdev_name": "Nvme2n2" 00:08:29.897 }, 00:08:29.897 { 00:08:29.897 "nbd_device": "/dev/nbd4", 00:08:29.897 "bdev_name": "Nvme2n3" 00:08:29.897 }, 00:08:29.897 { 00:08:29.897 "nbd_device": "/dev/nbd5", 00:08:29.897 "bdev_name": "Nvme3n1" 00:08:29.897 } 00:08:29.897 ]' 00:08:29.897 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:29.897 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:29.897 { 00:08:29.897 "nbd_device": "/dev/nbd0", 00:08:29.897 "bdev_name": "Nvme0n1" 00:08:29.897 }, 00:08:29.897 { 00:08:29.897 "nbd_device": "/dev/nbd1", 00:08:29.897 "bdev_name": "Nvme1n1" 00:08:29.897 }, 00:08:29.897 { 00:08:29.897 "nbd_device": "/dev/nbd2", 
00:08:29.897 "bdev_name": "Nvme2n1" 00:08:29.897 }, 00:08:29.897 { 00:08:29.897 "nbd_device": "/dev/nbd3", 00:08:29.897 "bdev_name": "Nvme2n2" 00:08:29.897 }, 00:08:29.897 { 00:08:29.897 "nbd_device": "/dev/nbd4", 00:08:29.897 "bdev_name": "Nvme2n3" 00:08:29.897 }, 00:08:29.897 { 00:08:29.897 "nbd_device": "/dev/nbd5", 00:08:29.897 "bdev_name": "Nvme3n1" 00:08:29.897 } 00:08:29.897 ]' 00:08:29.897 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:29.897 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:29.897 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.897 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:29.897 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:29.897 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:29.897 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.897 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:30.156 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:30.156 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:30.156 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:30.156 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.156 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.156 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:30.156 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.156 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.156 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.156 15:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:30.415 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:30.415 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:30.415 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:30.415 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.415 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.415 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:30.415 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.415 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.415 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.415 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 
-- # waitfornbd_exit nbd2 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.674 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:30.934 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:30.934 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:30.934 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:30.934 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.934 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.934 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:30.934 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.934 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.934 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.934 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:31.193 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:31.193 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:31.193 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:31.193 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.193 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.193 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:31.193 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:31.193 15:44:05 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:31.193 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.193 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.193 15:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:31.452 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:31.453 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:31.453 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:31.453 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:31.453 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.453 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:31.453 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:31.453 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:31.453 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:31.453 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:31.453 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:31.453 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:31.453 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme0n1 /dev/nbd0 00:08:31.711 /dev/nbd0 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:31.711 1+0 records in 00:08:31.711 1+0 records out 00:08:31.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595909 s, 6.9 MB/s 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:31.711 /dev/nbd1 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:31.711 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:31.711 1+0 records in 00:08:31.711 1+0 records out 00:08:31.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461591 s, 8.9 MB/s 
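Each successful waitfornbd is followed by the same one-block smoke read; stripped of the harness plumbing it looks like the sketch below (output file moved to /tmp here, while the run uses a scratch file under the repo's test/bdev directory):

# Read a single 4 KiB block through the nbd node with O_DIRECT and
# verify a non-empty file came back, as the traced dd/stat pair does.
dd if=/dev/nbd1 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
size=$(stat -c %s /tmp/nbdtest)
rm -f /tmp/nbdtest
[ "$size" != 0 ] && echo '/dev/nbd1 readable'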
00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:31.969 /dev/nbd10 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:31.969 1+0 records in 00:08:31.969 1+0 records out 00:08:31.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000781592 s, 5.2 MB/s 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:31.969 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:32.227 /dev/nbd11 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 
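All of the device setup and teardown in this phase goes through the dedicated /var/tmp/spdk-nbd.sock RPC socket rather than the default one. The full bdev-to-nbd round trip, with socket path and names as in this run:

# Export a bdev as an nbd device, list the mappings as JSON, then detach.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10
$RPC -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
$RPC -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10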
00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:32.227 1+0 records in 00:08:32.227 1+0 records out 00:08:32.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636898 s, 6.4 MB/s 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:32.227 15:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:32.484 /dev/nbd12 00:08:32.484 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:32.484 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:32.484 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:08:32.484 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:32.484 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:32.484 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:32.484 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:08:32.484 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:32.484 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:32.484 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:32.485 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:32.485 1+0 records in 00:08:32.485 1+0 records out 00:08:32.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000841409 s, 4.9 MB/s 00:08:32.485 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:32.485 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:32.485 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:32.485 15:44:07 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:32.485 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:32.485 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:32.485 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:32.485 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:32.742 /dev/nbd13 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:32.742 1+0 records in 00:08:32.742 1+0 records out 00:08:32.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569674 s, 7.2 MB/s 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.742 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:32.999 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:32.999 { 00:08:32.999 "nbd_device": "/dev/nbd0", 00:08:32.999 "bdev_name": "Nvme0n1" 00:08:32.999 }, 00:08:32.999 { 00:08:32.999 "nbd_device": "/dev/nbd1", 00:08:32.999 "bdev_name": "Nvme1n1" 00:08:32.999 }, 00:08:32.999 { 00:08:32.999 "nbd_device": "/dev/nbd10", 00:08:32.999 "bdev_name": "Nvme2n1" 00:08:32.999 }, 00:08:32.999 { 00:08:32.999 "nbd_device": "/dev/nbd11", 00:08:32.999 "bdev_name": "Nvme2n2" 00:08:32.999 }, 00:08:32.999 { 00:08:32.999 "nbd_device": "/dev/nbd12", 00:08:32.999 "bdev_name": "Nvme2n3" 00:08:32.999 
}, 00:08:32.999 { 00:08:32.999 "nbd_device": "/dev/nbd13", 00:08:32.999 "bdev_name": "Nvme3n1" 00:08:32.999 } 00:08:32.999 ]' 00:08:32.999 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:32.999 { 00:08:32.999 "nbd_device": "/dev/nbd0", 00:08:32.999 "bdev_name": "Nvme0n1" 00:08:32.999 }, 00:08:32.999 { 00:08:32.999 "nbd_device": "/dev/nbd1", 00:08:32.999 "bdev_name": "Nvme1n1" 00:08:32.999 }, 00:08:32.999 { 00:08:32.999 "nbd_device": "/dev/nbd10", 00:08:32.999 "bdev_name": "Nvme2n1" 00:08:32.999 }, 00:08:32.999 { 00:08:32.999 "nbd_device": "/dev/nbd11", 00:08:32.999 "bdev_name": "Nvme2n2" 00:08:32.999 }, 00:08:32.999 { 00:08:32.999 "nbd_device": "/dev/nbd12", 00:08:32.999 "bdev_name": "Nvme2n3" 00:08:32.999 }, 00:08:32.999 { 00:08:32.999 "nbd_device": "/dev/nbd13", 00:08:32.999 "bdev_name": "Nvme3n1" 00:08:32.999 } 00:08:32.999 ]' 00:08:32.999 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:32.999 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:32.999 /dev/nbd1 00:08:32.999 /dev/nbd10 00:08:32.999 /dev/nbd11 00:08:32.999 /dev/nbd12 00:08:32.999 /dev/nbd13' 00:08:32.999 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:32.999 /dev/nbd1 00:08:32.999 /dev/nbd10 00:08:32.999 /dev/nbd11 00:08:32.999 /dev/nbd12 00:08:32.999 /dev/nbd13' 00:08:32.999 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:32.999 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:32.999 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:32.999 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:32.999 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:33.000 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:33.000 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:33.000 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:33.000 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:33.000 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:33.000 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:33.000 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:33.000 256+0 records in 00:08:33.000 256+0 records out 00:08:33.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122948 s, 85.3 MB/s 00:08:33.000 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:33.000 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:33.000 256+0 records in 00:08:33.000 256+0 records out 00:08:33.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119161 s, 8.8 MB/s 00:08:33.000 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:33.000 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 
bs=4096 count=256 oflag=direct 00:08:33.257 256+0 records in 00:08:33.257 256+0 records out 00:08:33.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12577 s, 8.3 MB/s 00:08:33.257 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:33.257 15:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:33.257 256+0 records in 00:08:33.257 256+0 records out 00:08:33.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120334 s, 8.7 MB/s 00:08:33.257 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:33.257 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:33.514 256+0 records in 00:08:33.514 256+0 records out 00:08:33.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12917 s, 8.1 MB/s 00:08:33.514 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:33.514 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:33.514 256+0 records in 00:08:33.514 256+0 records out 00:08:33.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122257 s, 8.6 MB/s 00:08:33.514 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:33.514 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:33.772 256+0 records in 00:08:33.772 256+0 records out 00:08:33.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152987 s, 6.9 MB/s 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:33.772 15:44:08 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:33.772 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:34.205 15:44:08 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:34.463 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:34.463 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:34.463 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:34.463 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:34.463 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:34.463 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:34.463 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:34.463 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:34.463 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:34.463 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:34.722 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:34.980 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:34.980 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:34.980 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:34.980 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:34.980 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:34.980 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:34.980 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:34.980 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:34.980 15:44:09 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:34.980 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:34.980 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:34.980 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:34.980 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:34.980 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.980 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:35.238 15:44:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:35.497 malloc_lvol_verify 00:08:35.497 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:35.765 514c06ec-82fc-42ed-82f0-c24e52444fe8 00:08:35.765 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:35.765 0c93331f-3c33-4ab4-a8ef-aab2a7edb916 00:08:35.765 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:36.024 /dev/nbd0 00:08:36.024 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:36.024 mke2fs 1.46.5 (30-Dec-2021) 00:08:36.024 Discarding device blocks: 0/4096 done 00:08:36.024 Creating filesystem with 
4096 1k blocks and 1024 inodes 00:08:36.024 00:08:36.024 Allocating group tables: 0/1 done 00:08:36.024 Writing inode tables: 0/1 done 00:08:36.024 Creating journal (1024 blocks): done 00:08:36.024 Writing superblocks and filesystem accounting information: 0/1 done 00:08:36.024 00:08:36.024 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:36.024 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:36.024 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.024 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:36.024 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:36.024 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:36.024 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.024 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 77362 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 77362 ']' 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 77362 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77362 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:36.283 killing process with pid 77362 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77362' 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 77362 00:08:36.283 15:44:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 77362 00:08:36.541 15:44:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:36.541 00:08:36.541 real 0m9.051s 00:08:36.541 user 0m11.798s 00:08:36.541 sys 0m4.156s 00:08:36.541 15:44:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:36.541 15:44:11 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:36.541 ************************************ 00:08:36.541 END TEST bdev_nbd 00:08:36.541 ************************************ 00:08:36.541 15:44:11 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:36.541 15:44:11 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:08:36.541 skipping fio tests on NVMe due to multi-ns failures. 00:08:36.541 15:44:11 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:36.541 15:44:11 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:36.541 15:44:11 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:36.541 15:44:11 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:08:36.541 15:44:11 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:36.541 15:44:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:36.541 ************************************ 00:08:36.541 START TEST bdev_verify 00:08:36.541 ************************************ 00:08:36.541 15:44:11 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:36.800 [2024-07-20 15:44:11.380757] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:36.800 [2024-07-20 15:44:11.380902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77726 ] 00:08:36.800 [2024-07-20 15:44:11.531002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:36.800 [2024-07-20 15:44:11.573872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.800 [2024-07-20 15:44:11.573961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.367 Running I/O for 5 seconds... 
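For reference, the data pass that the nbd test above just completed (nbd_dd_data_verify, traced in full above) reduces to this sketch; device list and sizes are exactly as traced, only the temp-file location is shortened:

  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  tmp_file=/tmp/nbdrandtest
  # Write phase: one random 1 MiB file pushed through every device, O_DIRECT.
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done
  # Verify phase: byte-compare each device against the source, then drop it.
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"   # non-zero exit on first mismatch
  done
  rm "$tmp_file"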
00:08:42.630 00:08:42.630 Latency(us) 00:08:42.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.630 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:42.630 Verification LBA range: start 0x0 length 0xbd0bd 00:08:42.630 Nvme0n1 : 5.04 1855.39 7.25 0.00 0.00 68798.16 14633.74 79590.71 00:08:42.630 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:42.630 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:42.630 Nvme0n1 : 5.04 1879.04 7.34 0.00 0.00 67976.21 12844.00 66957.26 00:08:42.630 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:42.630 Verification LBA range: start 0x0 length 0xa0000 00:08:42.630 Nvme1n1 : 5.04 1854.83 7.25 0.00 0.00 68683.97 17265.71 72010.64 00:08:42.630 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:42.630 Verification LBA range: start 0xa0000 length 0xa0000 00:08:42.630 Nvme1n1 : 5.04 1878.56 7.34 0.00 0.00 67913.77 12475.53 65272.80 00:08:42.630 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:42.630 Verification LBA range: start 0x0 length 0x80000 00:08:42.630 Nvme2n1 : 5.05 1862.28 7.27 0.00 0.00 68302.98 6053.53 75800.67 00:08:42.630 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:42.630 Verification LBA range: start 0x80000 length 0x80000 00:08:42.630 Nvme2n1 : 5.04 1878.08 7.34 0.00 0.00 67807.58 12633.45 61903.88 00:08:42.630 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:42.630 Verification LBA range: start 0x0 length 0x80000 00:08:42.630 Nvme2n2 : 5.05 1861.76 7.27 0.00 0.00 68200.04 6185.12 77906.25 00:08:42.630 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:42.630 Verification LBA range: start 0x80000 length 0x80000 00:08:42.630 Nvme2n2 : 5.04 1877.60 7.33 0.00 0.00 67726.19 12475.53 63167.23 00:08:42.630 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:42.630 Verification LBA range: start 0x0 length 0x80000 00:08:42.630 Nvme2n3 : 5.06 1871.68 7.31 0.00 0.00 67807.54 4237.47 80011.82 00:08:42.630 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:42.630 Verification LBA range: start 0x80000 length 0x80000 00:08:42.630 Nvme2n3 : 5.05 1877.10 7.33 0.00 0.00 67647.06 12054.41 66115.03 00:08:42.630 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:42.630 Verification LBA range: start 0x0 length 0x20000 00:08:42.630 Nvme3n1 : 5.06 1871.28 7.31 0.00 0.00 67728.78 4316.43 80854.05 00:08:42.630 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:42.630 Verification LBA range: start 0x20000 length 0x20000 00:08:42.630 Nvme3n1 : 5.05 1887.13 7.37 0.00 0.00 67216.86 3368.92 66957.26 00:08:42.630 =================================================================================================================== 00:08:42.630 Total : 22454.72 87.71 0.00 0.00 67981.65 3368.92 80854.05 00:08:42.888 00:08:42.888 real 0m6.257s 00:08:42.888 user 0m11.706s 00:08:42.888 sys 0m0.254s 00:08:42.888 15:44:17 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:42.888 15:44:17 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:42.888 ************************************ 00:08:42.888 END TEST bdev_verify 00:08:42.888 ************************************ 00:08:42.888 15:44:17 blockdev_nvme -- bdev/blockdev.sh@778 -- # 
run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:42.888 15:44:17 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:08:42.888 15:44:17 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:42.888 15:44:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.888 ************************************ 00:08:42.888 START TEST bdev_verify_big_io 00:08:42.888 ************************************ 00:08:42.888 15:44:17 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:43.146 [2024-07-20 15:44:17.704414] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:43.146 [2024-07-20 15:44:17.704536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77813 ] 00:08:43.146 [2024-07-20 15:44:17.854941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:43.146 [2024-07-20 15:44:17.895874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.146 [2024-07-20 15:44:17.895899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.713 Running I/O for 5 seconds... 00:08:50.321 00:08:50.321 Latency(us) 00:08:50.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.321 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:50.321 Verification LBA range: start 0x0 length 0xbd0b 00:08:50.321 Nvme0n1 : 5.53 173.70 10.86 0.00 0.00 710598.92 26846.07 727686.48 00:08:50.321 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:50.321 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:50.321 Nvme0n1 : 5.32 168.43 10.53 0.00 0.00 738171.40 18318.50 754637.83 00:08:50.321 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:50.321 Verification LBA range: start 0x0 length 0xa000 00:08:50.322 Nvme1n1 : 5.60 179.71 11.23 0.00 0.00 682734.77 43164.27 613143.24 00:08:50.322 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:50.322 Verification LBA range: start 0xa000 length 0xa000 00:08:50.322 Nvme1n1 : 5.53 173.50 10.84 0.00 0.00 699629.92 79590.71 616512.15 00:08:50.322 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:50.322 Verification LBA range: start 0x0 length 0x8000 00:08:50.322 Nvme2n1 : 5.57 176.53 11.03 0.00 0.00 678850.53 43585.39 754637.83 00:08:50.322 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:50.322 Verification LBA range: start 0x8000 length 0x8000 00:08:50.322 Nvme2n1 : 5.62 178.97 11.19 0.00 0.00 667490.49 57692.74 603036.48 00:08:50.322 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:50.322 Verification LBA range: start 0x0 length 0x8000 00:08:50.322 Nvme2n2 : 5.63 182.66 11.42 0.00 0.00 641774.44 23582.43 754637.83 00:08:50.322 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:50.322 Verification LBA range: start 0x8000 length 0x8000 00:08:50.322 Nvme2n2 : 5.64 179.39 11.21 0.00 0.00 650369.14 
28425.25 923083.77 00:08:50.322 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:50.322 Verification LBA range: start 0x0 length 0x8000 00:08:50.322 Nvme2n3 : 5.63 186.16 11.64 0.00 0.00 616953.96 28004.14 751268.91 00:08:50.322 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:50.322 Verification LBA range: start 0x8000 length 0x8000 00:08:50.322 Nvme2n3 : 5.66 178.32 11.14 0.00 0.00 640170.28 18002.66 1327354.04 00:08:50.322 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:50.322 Verification LBA range: start 0x0 length 0x2000 00:08:50.322 Nvme3n1 : 5.66 204.04 12.75 0.00 0.00 552047.93 815.91 771482.42 00:08:50.322 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:50.322 Verification LBA range: start 0x2000 length 0x2000 00:08:50.322 Nvme3n1 : 5.67 194.32 12.14 0.00 0.00 576525.46 2118.73 1354305.39 00:08:50.322 =================================================================================================================== 00:08:50.322 Total : 2175.74 135.98 0.00 0.00 651403.53 815.91 1354305.39 00:08:50.322 00:08:50.322 real 0m7.107s 00:08:50.322 user 0m13.366s 00:08:50.322 sys 0m0.293s 00:08:50.322 15:44:24 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:50.322 ************************************ 00:08:50.322 END TEST bdev_verify_big_io 00:08:50.322 ************************************ 00:08:50.322 15:44:24 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:50.322 15:44:24 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:50.322 15:44:24 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:50.322 15:44:24 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:50.322 15:44:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.322 ************************************ 00:08:50.322 START TEST bdev_write_zeroes 00:08:50.322 ************************************ 00:08:50.322 15:44:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:50.322 [2024-07-20 15:44:24.903234] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:50.322 [2024-07-20 15:44:24.903399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77911 ] 00:08:50.322 [2024-07-20 15:44:25.057134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.322 [2024-07-20 15:44:25.096925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.887 Running I/O for 1 seconds... 
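The three bdevperf invocations driving these runs differ only in I/O size, workload, and duration. Paths are shortened from the traced /home/vagrant/spdk_repo/spdk prefix; flag annotations are standard bdevperf semantics, and -C is left as traced rather than annotated:

  # verify, 4 KiB I/O, 5 s, two reactors (hence paired 0x1/0x2 jobs per bdev)
  build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096  -w verify       -t 5 -C -m 0x3
  # big-io verify, 64 KiB I/O
  build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify       -t 5 -C -m 0x3
  # write_zeroes, 4 KiB I/O, 1 s, single reactor
  build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096  -w write_zeroes -t 1
  # -q queue depth per job, -o I/O size in bytes, -w workload pattern,
  # -t run time in seconds, -m reactor core mask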
00:08:51.819 00:08:51.819 Latency(us) 00:08:51.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.819 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:51.819 Nvme0n1 : 1.01 13659.40 53.36 0.00 0.00 9348.15 7053.67 22213.81 00:08:51.819 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:51.819 Nvme1n1 : 1.01 13645.58 53.30 0.00 0.00 9347.07 7316.87 22003.25 00:08:51.819 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:51.819 Nvme2n1 : 1.01 13632.03 53.25 0.00 0.00 9334.58 7527.43 21792.69 00:08:51.819 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:51.819 Nvme2n2 : 1.02 13665.54 53.38 0.00 0.00 9283.68 5921.93 20950.46 00:08:51.819 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:51.819 Nvme2n3 : 1.02 13652.02 53.33 0.00 0.00 9275.12 6264.08 21055.74 00:08:51.819 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:51.819 Nvme3n1 : 1.02 13682.71 53.45 0.00 0.00 9229.82 3763.71 19476.56 00:08:51.819 =================================================================================================================== 00:08:51.819 Total : 81937.30 320.07 0.00 0.00 9302.92 3763.71 22213.81 00:08:52.079 00:08:52.079 real 0m1.930s 00:08:52.079 user 0m1.602s 00:08:52.079 sys 0m0.220s 00:08:52.079 15:44:26 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:52.079 15:44:26 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:52.079 ************************************ 00:08:52.079 END TEST bdev_write_zeroes 00:08:52.079 ************************************ 00:08:52.079 15:44:26 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:52.079 15:44:26 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:52.079 15:44:26 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:52.079 15:44:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.079 ************************************ 00:08:52.079 START TEST bdev_json_nonenclosed 00:08:52.079 ************************************ 00:08:52.079 15:44:26 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:52.339 [2024-07-20 15:44:26.894521] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:52.339 [2024-07-20 15:44:26.894678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77953 ] 00:08:52.339 [2024-07-20 15:44:27.045450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.339 [2024-07-20 15:44:27.085808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.339 [2024-07-20 15:44:27.085920] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:08:52.339 [2024-07-20 15:44:27.085967] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:52.339 [2024-07-20 15:44:27.085984] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:52.599 00:08:52.599 real 0m0.378s 00:08:52.599 user 0m0.154s 00:08:52.599 sys 0m0.121s 00:08:52.599 ************************************ 00:08:52.599 END TEST bdev_json_nonenclosed 00:08:52.599 ************************************ 00:08:52.599 15:44:27 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:52.599 15:44:27 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:52.599 15:44:27 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:52.599 15:44:27 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:52.599 15:44:27 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:52.599 15:44:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.599 ************************************ 00:08:52.599 START TEST bdev_json_nonarray 00:08:52.599 ************************************ 00:08:52.599 15:44:27 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:52.599 [2024-07-20 15:44:27.325111] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:52.599 [2024-07-20 15:44:27.325412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77973 ] 00:08:52.869 [2024-07-20 15:44:27.473263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.869 [2024-07-20 15:44:27.513467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.869 [2024-07-20 15:44:27.513583] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
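Both negative cases here exercise json_config_prepare_ctx's shape checks. Per the two errors above, and per the config gen_nvme.sh emits later in this log, the accepted shape is a top-level object whose "subsystems" key is an array; the fixture files themselves are not printed in this log, so presumably nonenclosed.json drops the enclosing braces and nonarray.json makes "subsystems" a non-array. Accepted top-level shape, subsystem entries elided:

  {
    "subsystems": [
      { "subsystem": "bdev", "config": [ ... ] }
    ]
  }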
00:08:52.869 [2024-07-20 15:44:27.513619] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:52.869 [2024-07-20 15:44:27.513636] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:52.869 ************************************ 00:08:52.869 END TEST bdev_json_nonarray 00:08:52.869 ************************************ 00:08:52.869 00:08:52.869 real 0m0.361s 00:08:52.869 user 0m0.151s 00:08:52.869 sys 0m0.105s 00:08:52.869 15:44:27 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:52.869 15:44:27 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:53.132 15:44:27 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:08:53.132 15:44:27 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:08:53.132 15:44:27 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:08:53.132 15:44:27 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:08:53.132 15:44:27 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:08:53.132 15:44:27 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:53.132 15:44:27 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:53.132 15:44:27 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:53.132 15:44:27 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:53.132 15:44:27 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:53.132 15:44:27 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:53.132 00:08:53.132 real 0m29.994s 00:08:53.132 user 0m44.850s 00:08:53.132 sys 0m6.770s 00:08:53.132 ************************************ 00:08:53.132 END TEST blockdev_nvme 00:08:53.132 ************************************ 00:08:53.132 15:44:27 blockdev_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:53.132 15:44:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.132 15:44:27 -- spdk/autotest.sh@213 -- # uname -s 00:08:53.132 15:44:27 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:08:53.132 15:44:27 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:53.132 15:44:27 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:53.132 15:44:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:53.132 15:44:27 -- common/autotest_common.sh@10 -- # set +x 00:08:53.132 ************************************ 00:08:53.132 START TEST blockdev_nvme_gpt 00:08:53.132 ************************************ 00:08:53.132 15:44:27 blockdev_nvme_gpt -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:53.132 * Looking for test storage... 
00:08:53.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:53.132 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:53.132 15:44:27 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:53.132 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:53.132 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=78049 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 78049 00:08:53.133 15:44:27 blockdev_nvme_gpt -- common/autotest_common.sh@827 -- # '[' -z 78049 ']' 00:08:53.133 15:44:27 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:53.133 15:44:27 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.133 15:44:27 blockdev_nvme_gpt -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:53.133 15:44:27 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
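The start/stop discipline traced here — spdk_tgt in the background, a trap so a failing test cannot leak the daemon, then blocking until the RPC socket answers — looks roughly like the sketch below. The bodies of waitforlisten and killprocess are not shown in this log, so the polling loop is an assumption (rpc_get_methods is a standard SPDK RPC):

  build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  trap 'kill $spdk_tgt_pid; exit 1' SIGINT SIGTERM EXIT
  # Block until the target accepts RPCs on its default socket.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done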
00:08:53.133 15:44:27 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:53.133 15:44:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:53.391 [2024-07-20 15:44:28.008046] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:53.391 [2024-07-20 15:44:28.008178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78049 ] 00:08:53.391 [2024-07-20 15:44:28.160258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.649 [2024-07-20 15:44:28.201245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.215 15:44:28 blockdev_nvme_gpt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:54.215 15:44:28 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # return 0 00:08:54.215 15:44:28 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:54.215 15:44:28 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:08:54.215 15:44:28 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:54.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.731 Waiting for block devices as requested 00:08:54.731 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.989 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.989 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:55.247 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:00.508 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:00.508 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1666 -- # local nvme bdf 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:09:00.508 
15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:09:00.508 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:09:00.509 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:00.509 15:44:34 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:09:00.509 BYT; 00:09:00.509 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:00.509 15:44:34 blockdev_nvme_gpt 
-- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:09:00.509 BYT; 00:09:00.509 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:09:00.509 15:44:34 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:09:00.509 15:44:34 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:00.509 15:44:34 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:00.509 15:44:34 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:09:00.509 15:44:34 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:09:00.509 15:44:34 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:00.509 15:44:34 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:00.509 15:44:34 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:00.509 15:44:34 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:00.509 15:44:34 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:09:00.509 15:44:35 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:09:00.509 15:44:35 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:00.509 15:44:35 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:00.509 15:44:35 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:09:00.509 15:44:35 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:09:00.509 15:44:35 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:00.509 15:44:35 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:00.509 15:44:35 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:00.509 15:44:35 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:00.509 15:44:35 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:00.509 15:44:35 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 
1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:09:01.444 The operation has completed successfully. 00:09:01.444 15:44:36 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:09:02.380 The operation has completed successfully. 00:09:02.380 15:44:37 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:02.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:03.883 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.883 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.883 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.883 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.883 15:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:09:03.883 15:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.883 15:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.883 [] 00:09:03.883 15:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.883 15:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:09:03.883 15:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:03.883 15:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:03.883 15:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:04.141 15:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:04.141 15:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.141 15:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:04.399 15:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.399 15:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:09:04.399 15:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.399 15:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:04.399 15:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.399 15:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:09:04.399 15:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:09:04.399 15:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.399 15:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:04.399 15:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.399 15:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:09:04.399 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.399 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 
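The GPT setup above avoids hard-coding SPDK's partition-type GUIDs: it lifts them from module/bdev/gpt/gpt.h at run time. The trace suggests the parse sketched below, where IFS='()' splits the macro line around its parentheses and two substitutions rewrite the C argument list into the dashed form sgdisk accepts ($rootdir stands in for the repo checkout; the exact macro layout in gpt.h is an assumption):

get_spdk_gpt() {
    local spdk_guid
    local GPT_H=$rootdir/module/bdev/gpt/gpt.h
    # Keep only the text between the parentheses on the macro line.
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    # "0x6527994e, 0x2c5a, ..." -> "6527994e-2c5a-...", matching the two
    # assignments the trace shows at scripts/common.sh@426.
    spdk_guid=${spdk_guid//, /-} spdk_guid=${spdk_guid//0x/}
    echo "$spdk_guid"
}

The result feeds sgdisk directly, as above: sgdisk -t "1:$(get_spdk_gpt)" -u "1:$g_unique_partguid" /dev/nvme1n1.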
00:09:04.399 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.399 15:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:04.399 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.399 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:04.399 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.399 15:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:09:04.399 15:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:09:04.399 15:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:09:04.399 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.399 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:04.399 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.399 15:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:09:04.399 15:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:09:04.659 15:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "a0a002c1-eb7c-4dda-a958-c68af292abbc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a0a002c1-eb7c-4dda-a958-c68af292abbc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f95792b2-0082-4139-9918-d8bde28818ba"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f95792b2-0082-4139-9918-d8bde28818ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "3f330029-ec75-461d-992e-ca2a04b01592"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3f330029-ec75-461d-992e-ca2a04b01592",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3246bf71-f3b7-4fb4-864d-8f84ba08d293"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3246bf71-f3b7-4fb4-864d-8f84ba08d293",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0b6ab7dd-0924-4fdd-aaed-733b7a8c4a28"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0b6ab7dd-0924-4fdd-aaed-733b7a8c4a28",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:04.659 15:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:09:04.659 15:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:09:04.659 15:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:09:04.659 15:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 78049 00:09:04.659 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@946 -- # '[' -z 78049 ']' 00:09:04.659 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # kill -0 78049 00:09:04.659 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # uname 00:09:04.659 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:04.659 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78049 00:09:04.659 
killing process with pid 78049 00:09:04.659 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:04.659 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:04.659 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78049' 00:09:04.659 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@965 -- # kill 78049 00:09:04.659 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # wait 78049 00:09:04.916 15:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:04.916 15:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:04.916 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:09:04.916 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:04.916 15:44:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:04.916 ************************************ 00:09:04.916 START TEST bdev_hello_world 00:09:04.916 ************************************ 00:09:04.916 15:44:39 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:05.174 [2024-07-20 15:44:39.747326] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:05.174 [2024-07-20 15:44:39.747487] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78661 ] 00:09:05.174 [2024-07-20 15:44:39.896016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.174 [2024-07-20 15:44:39.937452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.741 [2024-07-20 15:44:40.307669] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:05.741 [2024-07-20 15:44:40.307715] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:05.741 [2024-07-20 15:44:40.307736] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:05.741 [2024-07-20 15:44:40.310000] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:05.741 [2024-07-20 15:44:40.310679] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:05.741 [2024-07-20 15:44:40.310728] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:05.741 [2024-07-20 15:44:40.310979] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
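The hello-world pass above runs the stock SPDK example against the first GPT partition: it opens Nvme0n1p1, writes a buffer, reads it back, and confirms the round trip (the 'Read string from bdev : Hello World!' notice). Reduced to repo-relative paths, the invocation from this run is:

build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1p1

The bdev.json it consumes was stitched together above from the save_subsystem_config dumps (the cat at blockdev.sh@740), so it carries the same four PCIe controller attachments as the running target.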
00:09:05.741 00:09:05.741 [2024-07-20 15:44:40.311007] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:06.000 00:09:06.000 real 0m0.869s 00:09:06.000 user 0m0.556s 00:09:06.000 sys 0m0.210s 00:09:06.000 15:44:40 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:06.000 15:44:40 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:06.000 ************************************ 00:09:06.000 END TEST bdev_hello_world 00:09:06.000 ************************************ 00:09:06.000 15:44:40 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:09:06.000 15:44:40 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:06.000 15:44:40 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:06.000 15:44:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:06.000 ************************************ 00:09:06.000 START TEST bdev_bounds 00:09:06.000 ************************************ 00:09:06.000 15:44:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:09:06.000 15:44:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=78692 00:09:06.000 15:44:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:06.000 15:44:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:06.000 Process bdevio pid: 78692 00:09:06.000 15:44:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 78692' 00:09:06.000 15:44:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 78692 00:09:06.000 15:44:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 78692 ']' 00:09:06.000 15:44:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.000 15:44:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:06.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.000 15:44:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.000 15:44:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:06.000 15:44:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:06.000 [2024-07-20 15:44:40.685447] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
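bdev_bounds drives bdevio, which starts in wait mode (-w) and is then steered over the SPDK RPC socket; tests.py perform_tests triggers the CUnit suites whose results follow. A sketch of the equivalent manual sequence with repo-relative paths (the listen-wait loop that waitforlisten performs is elided):

test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
bdevio_pid=$!
# ... wait for /var/tmp/spdk.sock to accept connections ...
test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"

One suite per bdev runs below, in reverse order of the I/O targets list, from Nvme3n1 down to Nvme0n1p1.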
00:09:06.000 [2024-07-20 15:44:40.685595] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78692 ] 00:09:06.259 [2024-07-20 15:44:40.837879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:06.259 [2024-07-20 15:44:40.880932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.259 [2024-07-20 15:44:40.881020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.259 [2024-07-20 15:44:40.881149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.827 15:44:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:06.827 15:44:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:09:06.827 15:44:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:06.827 I/O targets: 00:09:06.827 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:06.827 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:06.827 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:06.827 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:06.827 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:06.827 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:06.827 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:06.827 00:09:06.827 00:09:06.827 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.827 http://cunit.sourceforge.net/ 00:09:06.827 00:09:06.827 00:09:06.827 Suite: bdevio tests on: Nvme3n1 00:09:06.827 Test: blockdev write read block ...passed 00:09:06.827 Test: blockdev write zeroes read block ...passed 00:09:06.827 Test: blockdev write zeroes read no split ...passed 00:09:06.827 Test: blockdev write zeroes read split ...passed 00:09:06.827 Test: blockdev write zeroes read split partial ...passed 00:09:06.827 Test: blockdev reset ...[2024-07-20 15:44:41.550793] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:06.827 passed 00:09:06.827 Test: blockdev write read 8 blocks ...[2024-07-20 15:44:41.552840] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:06.827 passed 00:09:06.827 Test: blockdev write read size > 128k ...passed 00:09:06.827 Test: blockdev write read invalid size ...passed 00:09:06.827 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.827 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.827 Test: blockdev write read max offset ...passed 00:09:06.827 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.827 Test: blockdev writev readv 8 blocks ...passed 00:09:06.827 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.827 Test: blockdev writev readv block ...passed 00:09:06.827 Test: blockdev writev readv size > 128k ...passed 00:09:06.827 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.827 Test: blockdev comparev and writev ...[2024-07-20 15:44:41.559485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9204000 len:0x1000 00:09:06.827 [2024-07-20 15:44:41.559544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.827 passed 00:09:06.827 Test: blockdev nvme passthru rw ...passed 00:09:06.827 Test: blockdev nvme passthru vendor specific ...passed 00:09:06.827 Test: blockdev nvme admin passthru ...[2024-07-20 15:44:41.560375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.827 [2024-07-20 15:44:41.560408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.827 passed 00:09:06.827 Test: blockdev copy ...passed 00:09:06.827 Suite: bdevio tests on: Nvme2n3 00:09:06.827 Test: blockdev write read block ...passed 00:09:06.827 Test: blockdev write zeroes read block ...passed 00:09:06.827 Test: blockdev write zeroes read no split ...passed 00:09:06.827 Test: blockdev write zeroes read split ...passed 00:09:06.827 Test: blockdev write zeroes read split partial ...passed 00:09:06.827 Test: blockdev reset ...[2024-07-20 15:44:41.573940] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:06.827 [2024-07-20 15:44:41.576026] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:06.827 passed 00:09:06.827 Test: blockdev write read 8 blocks ...passed 00:09:06.827 Test: blockdev write read size > 128k ...passed 00:09:06.827 Test: blockdev write read invalid size ...passed 00:09:06.827 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.827 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.827 Test: blockdev write read max offset ...passed 00:09:06.827 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.827 Test: blockdev writev readv 8 blocks ...passed 00:09:06.827 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.827 Test: blockdev writev readv block ...passed 00:09:06.827 Test: blockdev writev readv size > 128k ...passed 00:09:06.827 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.827 Test: blockdev comparev and writev ...[2024-07-20 15:44:41.582657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9204000 len:0x1000 00:09:06.827 [2024-07-20 15:44:41.582703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.827 passed 00:09:06.827 Test: blockdev nvme passthru rw ...passed 00:09:06.827 Test: blockdev nvme passthru vendor specific ...passed 00:09:06.827 Test: blockdev nvme admin passthru ...[2024-07-20 15:44:41.583616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.827 [2024-07-20 15:44:41.583649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.827 passed 00:09:06.827 Test: blockdev copy ...passed 00:09:06.827 Suite: bdevio tests on: Nvme2n2 00:09:06.827 Test: blockdev write read block ...passed 00:09:06.827 Test: blockdev write zeroes read block ...passed 00:09:06.827 Test: blockdev write zeroes read no split ...passed 00:09:06.827 Test: blockdev write zeroes read split ...passed 00:09:06.827 Test: blockdev write zeroes read split partial ...passed 00:09:06.827 Test: blockdev reset ...[2024-07-20 15:44:41.600630] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:06.827 [2024-07-20 15:44:41.602607] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:06.827 passed 00:09:06.827 Test: blockdev write read 8 blocks ...passed 00:09:06.827 Test: blockdev write read size > 128k ...passed 00:09:06.827 Test: blockdev write read invalid size ...passed 00:09:06.827 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.827 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.827 Test: blockdev write read max offset ...passed 00:09:06.827 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.827 Test: blockdev writev readv 8 blocks ...passed 00:09:06.827 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.827 Test: blockdev writev readv block ...passed 00:09:06.827 Test: blockdev writev readv size > 128k ...passed 00:09:06.827 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.827 Test: blockdev comparev and writev ...[2024-07-20 15:44:41.609631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc022000 len:0x1000 00:09:06.828 passed 00:09:06.828 Test: blockdev nvme passthru rw ...[2024-07-20 15:44:41.609675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.828 passed 00:09:06.828 Test: blockdev nvme passthru vendor specific ...[2024-07-20 15:44:41.610516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.828 [2024-07-20 15:44:41.610549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.828 passed 00:09:06.828 Test: blockdev nvme admin passthru ...passed 00:09:06.828 Test: blockdev copy ...passed 00:09:06.828 Suite: bdevio tests on: Nvme2n1 00:09:06.828 Test: blockdev write read block ...passed 00:09:06.828 Test: blockdev write zeroes read block ...passed 00:09:06.828 Test: blockdev write zeroes read no split ...passed 00:09:07.087 Test: blockdev write zeroes read split ...passed 00:09:07.087 Test: blockdev write zeroes read split partial ...passed 00:09:07.087 Test: blockdev reset ...[2024-07-20 15:44:41.628690] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:07.087 [2024-07-20 15:44:41.630754] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:07.087 passed 00:09:07.087 Test: blockdev write read 8 blocks ...passed 00:09:07.087 Test: blockdev write read size > 128k ...passed 00:09:07.087 Test: blockdev write read invalid size ...passed 00:09:07.087 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:07.087 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:07.087 Test: blockdev write read max offset ...passed 00:09:07.087 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:07.087 Test: blockdev writev readv 8 blocks ...passed 00:09:07.087 Test: blockdev writev readv 30 x 1block ...passed 00:09:07.087 Test: blockdev writev readv block ...passed 00:09:07.087 Test: blockdev writev readv size > 128k ...passed 00:09:07.087 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:07.087 Test: blockdev comparev and writev ...[2024-07-20 15:44:41.637343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b920d000 len:0x1000 00:09:07.087 [2024-07-20 15:44:41.637394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:07.087 passed 00:09:07.087 Test: blockdev nvme passthru rw ...passed 00:09:07.087 Test: blockdev nvme passthru vendor specific ...[2024-07-20 15:44:41.638320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:07.087 [2024-07-20 15:44:41.638366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:07.087 passed 00:09:07.087 Test: blockdev nvme admin passthru ...passed 00:09:07.087 Test: blockdev copy ...passed 00:09:07.087 Suite: bdevio tests on: Nvme1n1 00:09:07.087 Test: blockdev write read block ...passed 00:09:07.087 Test: blockdev write zeroes read block ...passed 00:09:07.087 Test: blockdev write zeroes read no split ...passed 00:09:07.087 Test: blockdev write zeroes read split ...passed 00:09:07.087 Test: blockdev write zeroes read split partial ...passed 00:09:07.087 Test: blockdev reset ...[2024-07-20 15:44:41.657226] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:07.087 [2024-07-20 15:44:41.659026] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:07.087 passed 00:09:07.087 Test: blockdev write read 8 blocks ...passed 00:09:07.087 Test: blockdev write read size > 128k ...passed 00:09:07.087 Test: blockdev write read invalid size ...passed 00:09:07.087 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:07.087 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:07.087 Test: blockdev write read max offset ...passed 00:09:07.087 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:07.087 Test: blockdev writev readv 8 blocks ...passed 00:09:07.087 Test: blockdev writev readv 30 x 1block ...passed 00:09:07.087 Test: blockdev writev readv block ...passed 00:09:07.087 Test: blockdev writev readv size > 128k ...passed 00:09:07.087 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:07.087 Test: blockdev comparev and writev ...[2024-07-20 15:44:41.666036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8e32000 len:0x1000 00:09:07.087 [2024-07-20 15:44:41.666079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:07.087 passed 00:09:07.087 Test: blockdev nvme passthru rw ...passed 00:09:07.087 Test: blockdev nvme passthru vendor specific ...[2024-07-20 15:44:41.667028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:07.087 passed[2024-07-20 15:44:41.667066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:07.087 00:09:07.087 Test: blockdev nvme admin passthru ...passed 00:09:07.087 Test: blockdev copy ...passed 00:09:07.087 Suite: bdevio tests on: Nvme0n1p2 00:09:07.087 Test: blockdev write read block ...passed 00:09:07.087 Test: blockdev write zeroes read block ...passed 00:09:07.087 Test: blockdev write zeroes read no split ...passed 00:09:07.087 Test: blockdev write zeroes read split ...passed 00:09:07.087 Test: blockdev write zeroes read split partial ...passed 00:09:07.087 Test: blockdev reset ...[2024-07-20 15:44:41.686952] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:07.087 [2024-07-20 15:44:41.688829] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:07.087 passed 00:09:07.087 Test: blockdev write read 8 blocks ...passed 00:09:07.087 Test: blockdev write read size > 128k ...passed 00:09:07.087 Test: blockdev write read invalid size ...passed 00:09:07.087 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:07.087 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:07.087 Test: blockdev write read max offset ...passed 00:09:07.087 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:07.087 Test: blockdev writev readv 8 blocks ...passed 00:09:07.087 Test: blockdev writev readv 30 x 1block ...passed 00:09:07.087 Test: blockdev writev readv block ...passed 00:09:07.087 Test: blockdev writev readv size > 128k ...passed 00:09:07.087 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:07.087 Test: blockdev comparev and writev ...[2024-07-20 15:44:41.695038] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:07.087 separate metadata which is not supported yet. 
00:09:07.087 passed 00:09:07.087 Test: blockdev nvme passthru rw ...passed 00:09:07.087 Test: blockdev nvme passthru vendor specific ...passed 00:09:07.088 Test: blockdev nvme admin passthru ...passed 00:09:07.088 Test: blockdev copy ...passed 00:09:07.088 Suite: bdevio tests on: Nvme0n1p1 00:09:07.088 Test: blockdev write read block ...passed 00:09:07.088 Test: blockdev write zeroes read block ...passed 00:09:07.088 Test: blockdev write zeroes read no split ...passed 00:09:07.088 Test: blockdev write zeroes read split ...passed 00:09:07.088 Test: blockdev write zeroes read split partial ...passed 00:09:07.088 Test: blockdev reset ...[2024-07-20 15:44:41.713374] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:07.088 [2024-07-20 15:44:41.715172] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:07.088 passed 00:09:07.088 Test: blockdev write read 8 blocks ...passed 00:09:07.088 Test: blockdev write read size > 128k ...passed 00:09:07.088 Test: blockdev write read invalid size ...passed 00:09:07.088 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:07.088 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:07.088 Test: blockdev write read max offset ...passed 00:09:07.088 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:07.088 Test: blockdev writev readv 8 blocks ...passed 00:09:07.088 Test: blockdev writev readv 30 x 1block ...passed 00:09:07.088 Test: blockdev writev readv block ...passed 00:09:07.088 Test: blockdev writev readv size > 128k ...passed 00:09:07.088 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:07.088 Test: blockdev comparev and writev ...passed 00:09:07.088 Test: blockdev nvme passthru rw ...passed 00:09:07.088 Test: blockdev nvme passthru vendor specific ...passed 00:09:07.088 Test: blockdev nvme admin passthru ...passed 00:09:07.088 Test: blockdev copy ...[2024-07-20 15:44:41.721475] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:07.088 separate metadata which is not supported yet. 
00:09:07.088 passed 00:09:07.088 00:09:07.088 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.088 suites 7 7 n/a 0 0 00:09:07.088 tests 161 161 161 0 0 00:09:07.088 asserts 1006 1006 1006 0 n/a 00:09:07.088 00:09:07.088 Elapsed time = 0.440 seconds 00:09:07.088 0 00:09:07.088 15:44:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 78692 00:09:07.088 15:44:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 78692 ']' 00:09:07.088 15:44:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 78692 00:09:07.088 15:44:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:09:07.088 15:44:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:07.088 15:44:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78692 00:09:07.088 15:44:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:07.088 15:44:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:07.088 killing process with pid 78692 00:09:07.088 15:44:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78692' 00:09:07.088 15:44:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@965 -- # kill 78692 00:09:07.088 15:44:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # wait 78692 00:09:07.347 15:44:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:09:07.347 00:09:07.347 real 0m1.384s 00:09:07.347 user 0m3.266s 00:09:07.347 sys 0m0.371s 00:09:07.347 15:44:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:07.347 15:44:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:07.347 ************************************ 00:09:07.347 END TEST bdev_bounds 00:09:07.347 ************************************ 00:09:07.347 15:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:07.347 15:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:07.347 15:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:07.347 15:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:07.347 ************************************ 00:09:07.347 START TEST bdev_nbd 00:09:07.347 ************************************ 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd 
-- bdev/blockdev.sh@304 -- # local bdev_all 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=78741 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 78741 /var/tmp/spdk-nbd.sock 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 78741 ']' 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:07.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:07.347 15:44:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:07.606 [2024-07-20 15:44:42.154874] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
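With bdev_svc listening on /var/tmp/spdk-nbd.sock, the nbd test below exports each bdev as a kernel /dev/nbdX node and probes it with a single direct 4 KiB read. The per-bdev pattern repeated in the traces that follow reduces to roughly this (the trace writes to test/bdev/nbdtest and bounds the poll at 20 tries; /tmp/nbdtest and the open-ended loop here are simplifications):

# Export the bdev; the RPC prints the nbd node it was attached to.
nbd_device=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1)
# Poll until the kernel has registered the device.
until grep -q -w "${nbd_device#/dev/}" /proc/partitions; do sleep 0.1; done
# A single direct-I/O read proves the whole data path.
dd if="$nbd_device" of=/tmp/nbdtest bs=4096 count=1 iflag=direct

The 4096-byte transfers logged below only need to succeed; the reported rates (between 2.0 and 6.1 MB/s) are irrelevant to the pass/fail result.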
00:09:07.606 [2024-07-20 15:44:42.155087] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.606 [2024-07-20 15:44:42.299321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.606 [2024-07-20 15:44:42.339636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.174 15:44:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:08.174 15:44:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:09:08.174 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:08.174 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.174 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:08.174 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:08.174 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:08.174 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.174 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:08.174 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:08.174 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:08.174 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:08.175 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:08.175 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.175 15:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 
-- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.433 1+0 records in 00:09:08.433 1+0 records out 00:09:08.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671114 s, 6.1 MB/s 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.433 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.692 1+0 records in 00:09:08.692 1+0 records out 00:09:08.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00079677 s, 5.1 MB/s 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.692 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:08.950 15:44:43 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.950 1+0 records in 00:09:08.950 1+0 records out 00:09:08.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00204707 s, 2.0 MB/s 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.950 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:09.207 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:09.207 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:09.207 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:09.207 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:09:09.207 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:09.207 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:09.207 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:09.207 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:09:09.207 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:09.207 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:09.207 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:09.207 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.207 1+0 records in 00:09:09.207 1+0 records out 00:09:09.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740444 s, 5.5 MB/s 00:09:09.207 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.207 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:09.208 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.208 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:09.208 15:44:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:09.208 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:09.208 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.208 15:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:09.465 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:09.465 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:09.465 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:09.465 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:09:09.465 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:09.465 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:09.465 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:09.465 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:09:09.465 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:09.465 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:09.465 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:09.465 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.466 1+0 records in 00:09:09.466 1+0 records out 00:09:09.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712915 s, 5.7 MB/s 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd 
-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.466 1+0 records in 00:09:09.466 1+0 records out 00:09:09.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707413 s, 5.8 MB/s 00:09:09.466 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.723 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:09.723 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd6 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.724 1+0 records in 00:09:09.724 1+0 records out 00:09:09.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571144 s, 7.2 MB/s 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.724 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:09.981 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:09.981 { 00:09:09.981 "nbd_device": "/dev/nbd0", 00:09:09.981 "bdev_name": "Nvme0n1p1" 00:09:09.981 }, 00:09:09.981 { 00:09:09.981 "nbd_device": "/dev/nbd1", 00:09:09.981 "bdev_name": "Nvme0n1p2" 00:09:09.981 }, 00:09:09.981 { 00:09:09.981 "nbd_device": "/dev/nbd2", 00:09:09.981 "bdev_name": "Nvme1n1" 00:09:09.981 }, 00:09:09.981 { 00:09:09.981 "nbd_device": "/dev/nbd3", 00:09:09.981 "bdev_name": "Nvme2n1" 00:09:09.981 }, 00:09:09.981 { 00:09:09.981 "nbd_device": "/dev/nbd4", 00:09:09.981 "bdev_name": "Nvme2n2" 00:09:09.981 }, 00:09:09.981 { 00:09:09.981 "nbd_device": "/dev/nbd5", 00:09:09.981 "bdev_name": "Nvme2n3" 00:09:09.981 }, 00:09:09.981 { 00:09:09.981 "nbd_device": "/dev/nbd6", 00:09:09.981 "bdev_name": "Nvme3n1" 00:09:09.981 } 00:09:09.981 ]' 00:09:09.981 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:09.981 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:09.981 { 00:09:09.981 "nbd_device": "/dev/nbd0", 00:09:09.981 "bdev_name": "Nvme0n1p1" 00:09:09.981 }, 00:09:09.981 { 00:09:09.981 "nbd_device": "/dev/nbd1", 00:09:09.981 "bdev_name": "Nvme0n1p2" 00:09:09.981 }, 00:09:09.981 { 00:09:09.981 "nbd_device": "/dev/nbd2", 00:09:09.981 "bdev_name": "Nvme1n1" 00:09:09.981 }, 00:09:09.981 { 00:09:09.981 "nbd_device": "/dev/nbd3", 00:09:09.981 "bdev_name": "Nvme2n1" 00:09:09.981 }, 00:09:09.981 { 00:09:09.981 "nbd_device": "/dev/nbd4", 00:09:09.981 "bdev_name": "Nvme2n2" 00:09:09.981 }, 00:09:09.981 { 00:09:09.981 "nbd_device": "/dev/nbd5", 00:09:09.981 "bdev_name": "Nvme2n3" 00:09:09.981 }, 00:09:09.981 { 00:09:09.981 "nbd_device": "/dev/nbd6", 00:09:09.981 "bdev_name": "Nvme3n1" 00:09:09.981 } 00:09:09.981 ]' 00:09:09.981 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:09.981 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:09.981 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.981 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' 
'/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:09.981 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:09.981 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:09.981 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.981 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:10.238 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:10.238 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:10.238 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:10.238 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.238 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.238 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:10.238 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:10.238 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.238 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.238 15:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.495 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:10.753 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:10.753 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:10.753 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:10.753 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.753 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.753 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:10.753 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:10.753 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.753 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.753 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:11.012 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:11.012 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:11.012 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:11.012 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.012 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.012 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:11.012 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.012 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.012 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.012 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:11.270 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:11.270 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:11.270 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:11.270 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.270 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.270 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:11.270 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.270 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.270 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.270 15:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:11.270 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:11.270 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:11.270 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:11.270 15:44:46 
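Teardown mirrors startup: each nbd_stop_disk RPC above is followed by a poll that waits for the kernel to drop the device from /proc/partitions before the next one is stopped. Condensed from the trace, with the rpc.py path and socket exactly as logged and the poll delay again an assumption:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # Wait until the named nbd device has fully detached.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }

    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd4
    waitfornbd_exit nbd4
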
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.270 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.270 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:11.270 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.270 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.270 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:11.270 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.270 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:11.528 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:11.786 /dev/nbd0 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:11.786 1+0 records in 00:09:11.786 1+0 records out 00:09:11.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072375 s, 5.7 MB/s 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:11.786 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:12.045 /dev/nbd1 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.045 1+0 records in 00:09:12.045 1+0 records out 00:09:12.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498921 s, 8.2 MB/s 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:12.045 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:12.304 /dev/nbd10 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.304 1+0 records in 00:09:12.304 1+0 records out 00:09:12.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571467 s, 7.2 MB/s 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 
'!=' 0 ']' 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:12.304 15:44:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:12.561 /dev/nbd11 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.561 1+0 records in 00:09:12.561 1+0 records out 00:09:12.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00074299 s, 5.5 MB/s 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:12.561 /dev/nbd12 00:09:12.561 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:09:12.819 15:44:47 
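This second pass differs from the first: instead of letting the SPDK app pick the device node and capturing its answer, each bdev is pinned to a preselected node by passing the device path to nbd_start_disk explicitly. The pairing amounts to two parallel arrays walked by index, roughly as follows — the arrays are copied from the trace, $rpc/$sock and waitfornbd are as in the earlier sketches:

    bdev_list=(Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)

    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        # Explicit second argument: the RPC exports the bdev at that node.
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done
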
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.819 1+0 records in 00:09:12.819 1+0 records out 00:09:12.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731575 s, 5.6 MB/s 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.819 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:12.820 /dev/nbd13 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.820 1+0 records in 00:09:12.820 1+0 records out 00:09:12.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834717 s, 4.9 MB/s 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:12.820 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:13.078 /dev/nbd14 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.078 1+0 records in 00:09:13.078 1+0 records out 00:09:13.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713601 s, 5.7 MB/s 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.078 15:44:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:13.336 { 00:09:13.336 "nbd_device": "/dev/nbd0", 00:09:13.336 "bdev_name": "Nvme0n1p1" 00:09:13.336 }, 00:09:13.336 { 00:09:13.336 "nbd_device": "/dev/nbd1", 00:09:13.336 "bdev_name": "Nvme0n1p2" 00:09:13.336 }, 00:09:13.336 { 00:09:13.336 "nbd_device": "/dev/nbd10", 00:09:13.336 "bdev_name": "Nvme1n1" 00:09:13.336 }, 00:09:13.336 { 00:09:13.336 "nbd_device": "/dev/nbd11", 00:09:13.336 "bdev_name": "Nvme2n1" 00:09:13.336 }, 00:09:13.336 { 00:09:13.336 "nbd_device": "/dev/nbd12", 00:09:13.336 "bdev_name": "Nvme2n2" 00:09:13.336 }, 00:09:13.336 { 00:09:13.336 "nbd_device": "/dev/nbd13", 00:09:13.336 "bdev_name": "Nvme2n3" 00:09:13.336 }, 00:09:13.336 { 
00:09:13.336 "nbd_device": "/dev/nbd14", 00:09:13.336 "bdev_name": "Nvme3n1" 00:09:13.336 } 00:09:13.336 ]' 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:13.336 { 00:09:13.336 "nbd_device": "/dev/nbd0", 00:09:13.336 "bdev_name": "Nvme0n1p1" 00:09:13.336 }, 00:09:13.336 { 00:09:13.336 "nbd_device": "/dev/nbd1", 00:09:13.336 "bdev_name": "Nvme0n1p2" 00:09:13.336 }, 00:09:13.336 { 00:09:13.336 "nbd_device": "/dev/nbd10", 00:09:13.336 "bdev_name": "Nvme1n1" 00:09:13.336 }, 00:09:13.336 { 00:09:13.336 "nbd_device": "/dev/nbd11", 00:09:13.336 "bdev_name": "Nvme2n1" 00:09:13.336 }, 00:09:13.336 { 00:09:13.336 "nbd_device": "/dev/nbd12", 00:09:13.336 "bdev_name": "Nvme2n2" 00:09:13.336 }, 00:09:13.336 { 00:09:13.336 "nbd_device": "/dev/nbd13", 00:09:13.336 "bdev_name": "Nvme2n3" 00:09:13.336 }, 00:09:13.336 { 00:09:13.336 "nbd_device": "/dev/nbd14", 00:09:13.336 "bdev_name": "Nvme3n1" 00:09:13.336 } 00:09:13.336 ]' 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:13.336 /dev/nbd1 00:09:13.336 /dev/nbd10 00:09:13.336 /dev/nbd11 00:09:13.336 /dev/nbd12 00:09:13.336 /dev/nbd13 00:09:13.336 /dev/nbd14' 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:13.336 /dev/nbd1 00:09:13.336 /dev/nbd10 00:09:13.336 /dev/nbd11 00:09:13.336 /dev/nbd12 00:09:13.336 /dev/nbd13 00:09:13.336 /dev/nbd14' 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:13.336 256+0 records in 00:09:13.336 256+0 records out 00:09:13.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119706 s, 87.6 MB/s 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:13.336 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:13.594 256+0 records in 00:09:13.594 256+0 records out 00:09:13.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139663 s, 7.5 MB/s 00:09:13.594 
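The count check above is a small jq pipeline: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, the device paths are extracted, and the test insists that all seven bdevs are exported before any data is written. Condensed from the trace, with error handling reduced to a bare exit:

    nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    if [ "$count" -ne 7 ]; then
        exit 1    # a missing export would invalidate the data pass
    fi
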
15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:13.594 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:13.852 256+0 records in 00:09:13.852 256+0 records out 00:09:13.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140428 s, 7.5 MB/s 00:09:13.852 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:13.852 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:13.852 256+0 records in 00:09:13.852 256+0 records out 00:09:13.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137694 s, 7.6 MB/s 00:09:13.852 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:13.852 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:14.110 256+0 records in 00:09:14.110 256+0 records out 00:09:14.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138349 s, 7.6 MB/s 00:09:14.110 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:14.110 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:14.110 256+0 records in 00:09:14.110 256+0 records out 00:09:14.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137524 s, 7.6 MB/s 00:09:14.110 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:14.110 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:14.368 256+0 records in 00:09:14.368 256+0 records out 00:09:14.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138483 s, 7.6 MB/s 00:09:14.368 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:14.368 15:44:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:14.368 256+0 records in 00:09:14.368 256+0 records out 00:09:14.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135052 s, 7.8 MB/s 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:14.368 15:44:49 
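The write pass logged here pushes one shared 1 MiB random pattern through every device. oflag=direct forces each 4 KiB block to the device before dd returns, which is why the reported rates (~7.5 MB/s) reflect the synchronous NBD round trip rather than page-cache speed. The shape of the loop, per the trace:

    tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    # Generate the 1 MiB pattern once; every device receives the same bytes.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
    done
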
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:14.368 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.626 
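Verification reads each device back and byte-compares it against the pattern file. cmp exits non-zero on the first mismatch, so (assuming the harness runs with set -e, as SPDK test scripts typically do) any torn or misrouted write aborts the test on the spot; -n 1M bounds the compare to the written length. As traced:

    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i"   # -b prints any differing bytes
    done
    rm "$tmp_file"
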
15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.626 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:14.885 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:14.885 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:14.885 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:14.885 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:14.885 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:14.885 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:14.885 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:14.885 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:14.885 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.885 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:15.142 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:15.143 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:15.143 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:15.143 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.143 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.143 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:15.143 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:15.143 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.143 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.143 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:15.412 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:15.412 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:15.412 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:15.412 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.412 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.412 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:15.412 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:15.412 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.412 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.412 15:44:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:15.412 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:15.704 15:44:50 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:15.704 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:15.704 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.704 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.704 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:15.704 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:15.704 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.704 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.704 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:15.705 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:15.705 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:15.705 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:15.705 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.705 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.705 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:15.705 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:15.705 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.705 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.705 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:16.003 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:16.003 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:16.003 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:16.003 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.003 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.003 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:16.003 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:16.003 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.003 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:16.003 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.003 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:16.261 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:16.261 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:16.261 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:16.261 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:16.261 
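After the stop loop the same jq pipeline runs again, now expecting an empty array. The bare `true` visible in the trace is the interesting part: grep -c exits non-zero when it counts zero matches, so the `|| true` guard turns "no devices left" from a pipeline failure into a clean count of 0. In effect:

    nbd_disks_name=$(echo '[]' | jq -r '.[] | .nbd_device')    # empty output
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true) # 0, not an error
    if [ "$count" -ne 0 ]; then
        exit 1    # something failed to detach
    fi
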
15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:16.261 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:16.261 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:16.261 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:16.261 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:16.261 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:16.261 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:16.261 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:16.261 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:16.261 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.261 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:16.262 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:16.262 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:16.262 15:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:16.262 malloc_lvol_verify 00:09:16.262 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:16.520 b5d71ad0-f3bb-4659-b9b3-09cfaf4204ae 00:09:16.520 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:16.778 d72e8703-674c-4082-b825-ce92e464f460 00:09:16.778 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:17.037 /dev/nbd0 00:09:17.037 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:17.037 mke2fs 1.46.5 (30-Dec-2021) 00:09:17.037 Discarding device blocks: 0/4096 done 00:09:17.037 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:17.037 00:09:17.037 Allocating group tables: 0/1 done 00:09:17.037 Writing inode tables: 0/1 done 00:09:17.037 Creating journal (1024 blocks): done 00:09:17.037 Writing superblocks and filesystem accounting information: 0/1 done 00:09:17.037 00:09:17.037 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:17.037 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:17.037 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.037 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:17.037 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:17.037 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:17.037 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.037 
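The final smoke test layers a real filesystem over the stack: a 16 MiB malloc bdev with 512-byte blocks hosts an lvstore, a 4 MiB logical volume carved from it is exported at /dev/nbd0, and mkfs.ext4 formats it (the mke2fs output above shows the expected 4096 1k blocks and a clean journal write). The RPC sequence, verbatim from the trace, with $rpc/$sock as before:

    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    # mkfs succeeding end to end exercises reads, writes, and flushes
    # through malloc bdev -> lvstore -> lvol -> NBD.
    mkfs.ext4 /dev/nbd0
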
15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:17.037 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:17.037 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:17.037 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:17.037 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.037 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.037 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 78741 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 78741 ']' 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 78741 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78741 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:17.296 killing process with pid 78741 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78741' 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@965 -- # kill 78741 00:09:17.296 15:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # wait 78741 00:09:17.567 15:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:09:17.567 00:09:17.567 real 0m10.070s 00:09:17.567 user 0m13.059s 00:09:17.567 sys 0m4.697s 00:09:17.567 15:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:17.567 ************************************ 00:09:17.567 END TEST bdev_nbd 00:09:17.567 ************************************ 00:09:17.567 15:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:17.567 15:44:52 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:09:17.567 15:44:52 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:09:17.567 15:44:52 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:09:17.567 skipping fio tests on NVMe due to multi-ns failures. 00:09:17.567 15:44:52 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
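[Editor's note] The killprocess helper traced at the end of bdev_nbd checks that the pid still refers to an SPDK reactor before killing it. A sketch assembled from the traced checks; the sudo special-case and error paths are assumptions, since the trace only shows the comparisons:

killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1          # traced as: '[' -z 78741 ']'
        kill -0 "$pid" || return           # is the pid still alive?
        if [ "$(uname)" = Linux ]; then
                # confirm the target is a reactor process, never sudo
                local process_name
                process_name=$(ps --no-headers -o comm= "$pid")
                [ "$process_name" = sudo ] && return 1   # assumed handling
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                        # reap the process before returning
}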
00:09:17.567 15:44:52 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:17.567 15:44:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:17.567 15:44:52 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:09:17.567 15:44:52 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:17.567 15:44:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:17.567 ************************************ 00:09:17.567 START TEST bdev_verify 00:09:17.567 ************************************ 00:09:17.567 15:44:52 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:17.567 [2024-07-20 15:44:52.289379] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:17.567 [2024-07-20 15:44:52.289516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79138 ] 00:09:17.826 [2024-07-20 15:44:52.439682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:17.826 [2024-07-20 15:44:52.482475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.826 [2024-07-20 15:44:52.482569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.393 Running I/O for 5 seconds... 00:09:23.654 00:09:23.654 Latency(us) 00:09:23.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.654 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.654 Verification LBA range: start 0x0 length 0x5e800 00:09:23.654 Nvme0n1p1 : 5.07 1490.63 5.82 0.00 0.00 85696.91 18213.22 84222.97 00:09:23.654 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.654 Verification LBA range: start 0x5e800 length 0x5e800 00:09:23.654 Nvme0n1p1 : 5.08 1486.99 5.81 0.00 0.00 85564.04 10264.67 72010.64 00:09:23.654 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.654 Verification LBA range: start 0x0 length 0x5e7ff 00:09:23.654 Nvme0n1p2 : 5.07 1490.20 5.82 0.00 0.00 85484.65 17792.10 72852.87 00:09:23.654 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.654 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:23.654 Nvme0n1p2 : 5.08 1486.56 5.81 0.00 0.00 85447.62 10264.67 74116.22 00:09:23.654 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.654 Verification LBA range: start 0x0 length 0xa0000 00:09:23.654 Nvme1n1 : 5.07 1489.79 5.82 0.00 0.00 85315.13 18318.50 65693.92 00:09:23.654 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.654 Verification LBA range: start 0xa0000 length 0xa0000 00:09:23.654 Nvme1n1 : 5.08 1486.23 5.81 0.00 0.00 85307.35 10369.95 75379.56 00:09:23.654 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.654 Verification LBA range: start 0x0 length 0x80000 00:09:23.654 Nvme2n1 : 5.07 1489.40 5.82 0.00 0.00 85192.72 17370.99 68220.61 00:09:23.654 Job: Nvme2n1 (Core Mask 0x2, workload: 
verify, depth: 128, IO size: 4096) 00:09:23.654 Verification LBA range: start 0x80000 length 0x80000 00:09:23.654 Nvme2n1 : 5.08 1485.91 5.80 0.00 0.00 85167.26 10475.23 76642.90 00:09:23.654 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.654 Verification LBA range: start 0x0 length 0x80000 00:09:23.654 Nvme2n2 : 5.07 1489.00 5.82 0.00 0.00 85057.21 16634.04 70326.18 00:09:23.654 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.654 Verification LBA range: start 0x80000 length 0x80000 00:09:23.654 Nvme2n2 : 5.06 1479.07 5.78 0.00 0.00 86163.27 10317.31 77485.13 00:09:23.654 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.654 Verification LBA range: start 0x0 length 0x80000 00:09:23.654 Nvme2n3 : 5.07 1488.56 5.81 0.00 0.00 84925.33 16318.20 74116.22 00:09:23.654 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.654 Verification LBA range: start 0x80000 length 0x80000 00:09:23.654 Nvme2n3 : 5.08 1487.72 5.81 0.00 0.00 85801.57 11264.82 74116.22 00:09:23.654 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:23.654 Verification LBA range: start 0x0 length 0x20000 00:09:23.654 Nvme3n1 : 5.08 1499.74 5.86 0.00 0.00 84239.83 1947.66 74537.33 00:09:23.654 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:23.654 Verification LBA range: start 0x20000 length 0x20000 00:09:23.654 Nvme3n1 : 5.08 1487.34 5.81 0.00 0.00 85674.37 11370.10 70326.18 00:09:23.654 =================================================================================================================== 00:09:23.654 Total : 20837.14 81.40 0.00 0.00 85358.64 1947.66 84222.97 00:09:23.654 00:09:23.654 real 0m6.165s 00:09:23.654 user 0m11.493s 00:09:23.654 sys 0m0.277s 00:09:23.654 15:44:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:23.654 15:44:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:23.654 ************************************ 00:09:23.654 END TEST bdev_verify 00:09:23.654 ************************************ 00:09:23.654 15:44:58 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:23.654 15:44:58 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:09:23.654 15:44:58 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:23.654 15:44:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:23.913 ************************************ 00:09:23.913 START TEST bdev_verify_big_io 00:09:23.913 ************************************ 00:09:23.913 15:44:58 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:23.913 [2024-07-20 15:44:58.537512] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
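[Editor's note] The bdev_verify pass that just finished and the bdev_verify_big_io pass starting here drive the same bdevperf binary; only the I/O size changes between them. The traced invocation, reformatted for readability (the -q/-o/-w/-t/-m meanings follow standard bdevperf usage; -C is left uninterpreted here):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev config to load
        -q 128       # outstanding I/Os per job
        -o 4096      # I/O size in bytes; the big-I/O pass switches to -o 65536
        -w verify    # write, read back, and compare
        -t 5         # run time in seconds
        -C           # as traced; meaning not restated here
        -m 0x3       # core mask, matching the "-c 0x3" EAL parameters line above
)
"$bdevperf" "${args[@]}"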
00:09:23.913 [2024-07-20 15:44:58.537645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79230 ] 00:09:23.913 [2024-07-20 15:44:58.688520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:24.171 [2024-07-20 15:44:58.743292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.171 [2024-07-20 15:44:58.743421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.429 Running I/O for 5 seconds... 00:09:30.993 00:09:30.993 Latency(us) 00:09:30.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.993 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.993 Verification LBA range: start 0x0 length 0x5e80 00:09:30.993 Nvme0n1p1 : 5.65 136.51 8.53 0.00 0.00 895026.31 24845.78 889394.58 00:09:30.993 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.993 Verification LBA range: start 0x5e80 length 0x5e80 00:09:30.993 Nvme0n1p1 : 5.70 147.27 9.20 0.00 0.00 776505.85 49059.88 1105005.39 00:09:30.993 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.993 Verification LBA range: start 0x0 length 0x5e7f 00:09:30.993 Nvme0n1p2 : 5.70 143.44 8.96 0.00 0.00 840957.61 57692.74 1037627.01 00:09:30.993 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.993 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:30.993 Nvme0n1p2 : 5.77 151.01 9.44 0.00 0.00 742383.48 24951.06 1569916.20 00:09:30.993 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.993 Verification LBA range: start 0x0 length 0xa000 00:09:30.993 Nvme1n1 : 5.73 151.24 9.45 0.00 0.00 786192.18 45480.40 889394.58 00:09:30.993 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.993 Verification LBA range: start 0xa000 length 0xa000 00:09:30.993 Nvme1n1 : 5.78 163.31 10.21 0.00 0.00 674087.32 1250.18 1590129.71 00:09:30.993 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.993 Verification LBA range: start 0x0 length 0x8000 00:09:30.993 Nvme2n1 : 5.73 152.78 9.55 0.00 0.00 763243.76 45690.96 828754.04 00:09:30.993 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.993 Verification LBA range: start 0x8000 length 0x8000 00:09:30.993 Nvme2n1 : 5.59 141.39 8.84 0.00 0.00 871682.84 40216.47 896132.42 00:09:30.993 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.993 Verification LBA range: start 0x0 length 0x8000 00:09:30.993 Nvme2n2 : 5.74 156.21 9.76 0.00 0.00 733434.98 35584.21 778220.26 00:09:30.993 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.993 Verification LBA range: start 0x8000 length 0x8000 00:09:30.993 Nvme2n2 : 5.65 147.34 9.21 0.00 0.00 829531.99 54744.93 747899.99 00:09:30.993 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.993 Verification LBA range: start 0x0 length 0x8000 00:09:30.993 Nvme2n3 : 5.77 159.00 9.94 0.00 0.00 703321.68 27372.47 798433.77 00:09:30.993 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.993 Verification LBA range: start 0x8000 length 0x8000 00:09:30.993 Nvme2n3 : 5.65 147.26 9.20 0.00 0.00 810930.81 56008.28 700735.13 00:09:30.993 
Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.993 Verification LBA range: start 0x0 length 0x2000 00:09:30.993 Nvme3n1 : 5.80 176.86 11.05 0.00 0.00 621666.92 2724.09 811909.45 00:09:30.993 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.993 Verification LBA range: start 0x2000 length 0x2000 00:09:30.993 Nvme3n1 : 5.70 147.15 9.20 0.00 0.00 795376.17 49481.00 1084791.88 00:09:30.993 =================================================================================================================== 00:09:30.993 Total : 2120.77 132.55 0.00 0.00 769382.81 1250.18 1590129.71 00:09:30.993 00:09:30.993 real 0m7.276s 00:09:30.993 user 0m13.651s 00:09:30.993 sys 0m0.319s 00:09:30.993 15:45:05 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:30.993 15:45:05 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:30.993 ************************************ 00:09:30.993 END TEST bdev_verify_big_io 00:09:30.993 ************************************ 00:09:31.252 15:45:05 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:31.252 15:45:05 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:31.252 15:45:05 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:31.252 15:45:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:31.252 ************************************ 00:09:31.252 START TEST bdev_write_zeroes 00:09:31.252 ************************************ 00:09:31.252 15:45:05 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:31.252 [2024-07-20 15:45:05.888604] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:31.252 [2024-07-20 15:45:05.888743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79329 ] 00:09:31.252 [2024-07-20 15:45:06.038944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.510 [2024-07-20 15:45:06.089593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.767 Running I/O for 1 seconds... 
00:09:33.140 00:09:33.140 Latency(us) 00:09:33.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.140 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.140 Nvme0n1p1 : 1.01 9787.05 38.23 0.00 0.00 13041.04 9264.53 29688.60 00:09:33.140 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.140 Nvme0n1p2 : 1.01 9776.27 38.19 0.00 0.00 13036.29 9317.17 29688.60 00:09:33.140 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.140 Nvme1n1 : 1.02 9766.73 38.15 0.00 0.00 13025.99 9685.64 29688.60 00:09:33.140 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.140 Nvme2n1 : 1.02 9757.73 38.12 0.00 0.00 13015.73 9843.56 28214.70 00:09:33.140 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.140 Nvme2n2 : 1.02 9793.27 38.25 0.00 0.00 12938.53 7843.26 24845.78 00:09:33.140 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.140 Nvme2n3 : 1.02 9825.85 38.38 0.00 0.00 12815.42 4369.07 19371.28 00:09:33.140 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.140 Nvme3n1 : 1.02 9866.12 38.54 0.00 0.00 12716.87 3711.07 17055.15 00:09:33.140 =================================================================================================================== 00:09:33.140 Total : 68573.01 267.86 0.00 0.00 12940.56 3711.07 29688.60 00:09:33.140 00:09:33.140 real 0m1.952s 00:09:33.140 user 0m1.602s 00:09:33.140 sys 0m0.240s 00:09:33.140 15:45:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:33.140 15:45:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:33.140 ************************************ 00:09:33.140 END TEST bdev_write_zeroes 00:09:33.140 ************************************ 00:09:33.140 15:45:07 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:33.140 15:45:07 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:33.140 15:45:07 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:33.140 15:45:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:33.140 ************************************ 00:09:33.140 START TEST bdev_json_nonenclosed 00:09:33.140 ************************************ 00:09:33.140 15:45:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:33.140 [2024-07-20 15:45:07.911533] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:09:33.140 [2024-07-20 15:45:07.911710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79371 ] 00:09:33.400 [2024-07-20 15:45:08.061037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.400 [2024-07-20 15:45:08.112719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.400 [2024-07-20 15:45:08.112830] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:33.400 [2024-07-20 15:45:08.112864] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:33.400 [2024-07-20 15:45:08.112877] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:33.660 00:09:33.660 real 0m0.392s 00:09:33.660 user 0m0.154s 00:09:33.660 sys 0m0.135s 00:09:33.660 15:45:08 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:33.660 ************************************ 00:09:33.660 15:45:08 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:33.660 END TEST bdev_json_nonenclosed 00:09:33.660 ************************************ 00:09:33.660 15:45:08 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:33.660 15:45:08 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:33.660 15:45:08 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:33.660 15:45:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:33.660 ************************************ 00:09:33.660 START TEST bdev_json_nonarray 00:09:33.660 ************************************ 00:09:33.660 15:45:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:33.660 [2024-07-20 15:45:08.379761] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:33.660 [2024-07-20 15:45:08.379898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79391 ] 00:09:33.919 [2024-07-20 15:45:08.529305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.919 [2024-07-20 15:45:08.582478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.919 [2024-07-20 15:45:08.582594] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
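[Editor's note] The two JSON negative tests here feed bdevperf deliberately malformed --json configs. Only the error strings appear in the log, so the fixture contents below are assumptions chosen to match those errors; the accepted shape is a top-level object whose "subsystems" key is an array:

printf '%s\n' '{ "subsystems": [] }' > good.json          # shape the loader accepts
printf '%s\n' '"subsystems": []'     > nonenclosed.json   # plausible trigger for "not enclosed in {}"
printf '%s\n' '{ "subsystems": {} }' > nonarray.json      # plausible trigger for "'subsystems' should be an array"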
00:09:33.919 [2024-07-20 15:45:08.582620] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:33.919 [2024-07-20 15:45:08.582633] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:33.919 00:09:33.919 real 0m0.394s 00:09:33.919 user 0m0.157s 00:09:33.919 sys 0m0.133s 00:09:33.919 15:45:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:33.919 ************************************ 00:09:33.919 15:45:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:33.919 END TEST bdev_json_nonarray 00:09:33.919 ************************************ 00:09:34.179 15:45:08 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:09:34.179 15:45:08 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:09:34.179 15:45:08 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:34.179 15:45:08 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:34.179 15:45:08 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.179 15:45:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.179 ************************************ 00:09:34.179 START TEST bdev_gpt_uuid 00:09:34.179 ************************************ 00:09:34.179 15:45:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1121 -- # bdev_gpt_uuid 00:09:34.179 15:45:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:09:34.179 15:45:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:09:34.179 15:45:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=79421 00:09:34.179 15:45:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:34.179 15:45:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:34.179 15:45:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 79421 00:09:34.179 15:45:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@827 -- # '[' -z 79421 ']' 00:09:34.179 15:45:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.179 15:45:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:34.179 15:45:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.179 15:45:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:34.179 15:45:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:34.179 [2024-07-20 15:45:08.865826] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
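[Editor's note] The bdev_gpt_uuid test starting here brings up spdk_tgt, loads the bdev config, then looks up each GPT partition bdev by its unique partition GUID and cross-checks the returned JSON with jq, as the traces below show. The comparison idiom reduces to the following, with the GUID as in the log (the backslash-escaped pattern in the trace simply forces a literal, non-glob match):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev=$($RPC bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
[[ $(jq -r length <<< "$bdev") == 1 ]]   # exactly one bdev matches the GUID
[[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]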
00:09:34.179 [2024-07-20 15:45:08.865956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79421 ] 00:09:34.439 [2024-07-20 15:45:09.015001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.439 [2024-07-20 15:45:09.061135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.046 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:35.046 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # return 0 00:09:35.046 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:35.046 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.046 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:35.305 Some configs were skipped because the RPC state that can call them passed over. 00:09:35.305 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.305 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:09:35.305 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.305 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:35.305 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.305 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:35.305 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.305 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:35.305 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.305 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:09:35.305 { 00:09:35.305 "name": "Nvme0n1p1", 00:09:35.305 "aliases": [ 00:09:35.305 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:35.305 ], 00:09:35.305 "product_name": "GPT Disk", 00:09:35.305 "block_size": 4096, 00:09:35.305 "num_blocks": 774144, 00:09:35.305 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:35.305 "md_size": 64, 00:09:35.305 "md_interleave": false, 00:09:35.305 "dif_type": 0, 00:09:35.305 "assigned_rate_limits": { 00:09:35.305 "rw_ios_per_sec": 0, 00:09:35.305 "rw_mbytes_per_sec": 0, 00:09:35.305 "r_mbytes_per_sec": 0, 00:09:35.305 "w_mbytes_per_sec": 0 00:09:35.305 }, 00:09:35.305 "claimed": false, 00:09:35.305 "zoned": false, 00:09:35.305 "supported_io_types": { 00:09:35.305 "read": true, 00:09:35.305 "write": true, 00:09:35.305 "unmap": true, 00:09:35.305 "write_zeroes": true, 00:09:35.305 "flush": true, 00:09:35.305 "reset": true, 00:09:35.305 "compare": true, 00:09:35.305 "compare_and_write": false, 00:09:35.305 "abort": true, 00:09:35.305 "nvme_admin": false, 00:09:35.305 "nvme_io": false 00:09:35.305 }, 00:09:35.305 "driver_specific": { 00:09:35.305 "gpt": { 00:09:35.305 "base_bdev": "Nvme0n1", 00:09:35.305 "offset_blocks": 256, 00:09:35.305 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:35.305 "unique_partition_guid": 
"6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:35.305 "partition_name": "SPDK_TEST_first" 00:09:35.305 } 00:09:35.305 } 00:09:35.305 } 00:09:35.305 ]' 00:09:35.305 15:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:09:35.305 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:09:35.305 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:09:35.305 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:35.305 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:35.305 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:35.305 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:35.305 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.305 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:35.564 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.564 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:09:35.564 { 00:09:35.564 "name": "Nvme0n1p2", 00:09:35.564 "aliases": [ 00:09:35.564 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:35.564 ], 00:09:35.565 "product_name": "GPT Disk", 00:09:35.565 "block_size": 4096, 00:09:35.565 "num_blocks": 774143, 00:09:35.565 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:35.565 "md_size": 64, 00:09:35.565 "md_interleave": false, 00:09:35.565 "dif_type": 0, 00:09:35.565 "assigned_rate_limits": { 00:09:35.565 "rw_ios_per_sec": 0, 00:09:35.565 "rw_mbytes_per_sec": 0, 00:09:35.565 "r_mbytes_per_sec": 0, 00:09:35.565 "w_mbytes_per_sec": 0 00:09:35.565 }, 00:09:35.565 "claimed": false, 00:09:35.565 "zoned": false, 00:09:35.565 "supported_io_types": { 00:09:35.565 "read": true, 00:09:35.565 "write": true, 00:09:35.565 "unmap": true, 00:09:35.565 "write_zeroes": true, 00:09:35.565 "flush": true, 00:09:35.565 "reset": true, 00:09:35.565 "compare": true, 00:09:35.565 "compare_and_write": false, 00:09:35.565 "abort": true, 00:09:35.565 "nvme_admin": false, 00:09:35.565 "nvme_io": false 00:09:35.565 }, 00:09:35.565 "driver_specific": { 00:09:35.565 "gpt": { 00:09:35.565 "base_bdev": "Nvme0n1", 00:09:35.565 "offset_blocks": 774400, 00:09:35.565 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:35.565 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:35.565 "partition_name": "SPDK_TEST_second" 00:09:35.565 } 00:09:35.565 } 00:09:35.565 } 00:09:35.565 ]' 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- 
bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 79421 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@946 -- # '[' -z 79421 ']' 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # kill -0 79421 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # uname 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79421 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:35.565 killing process with pid 79421 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79421' 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@965 -- # kill 79421 00:09:35.565 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # wait 79421 00:09:36.133 00:09:36.133 real 0m1.866s 00:09:36.134 user 0m1.944s 00:09:36.134 sys 0m0.446s 00:09:36.134 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.134 15:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:36.134 ************************************ 00:09:36.134 END TEST bdev_gpt_uuid 00:09:36.134 ************************************ 00:09:36.134 15:45:10 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:09:36.134 15:45:10 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:36.134 15:45:10 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:09:36.134 15:45:10 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:36.134 15:45:10 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:36.134 15:45:10 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:36.134 15:45:10 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:36.134 15:45:10 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:36.134 15:45:10 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:36.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:36.961 Waiting for block devices as requested 00:09:36.961 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.961 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.220 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.220 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.488 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:42.488 15:45:16 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:09:42.488 15:45:16 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- 
# wipefs --all /dev/nvme1n1 00:09:42.488 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:42.488 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:42.488 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:42.488 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:09:42.488 15:45:17 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:42.488 00:09:42.488 real 0m49.462s 00:09:42.488 user 0m59.249s 00:09:42.488 sys 0m10.919s 00:09:42.488 15:45:17 blockdev_nvme_gpt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:42.488 15:45:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:42.488 ************************************ 00:09:42.488 END TEST blockdev_nvme_gpt 00:09:42.488 ************************************ 00:09:42.747 15:45:17 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:42.747 15:45:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:42.747 15:45:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:42.747 15:45:17 -- common/autotest_common.sh@10 -- # set +x 00:09:42.747 ************************************ 00:09:42.747 START TEST nvme 00:09:42.747 ************************************ 00:09:42.747 15:45:17 nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:42.747 * Looking for test storage... 00:09:42.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:42.747 15:45:17 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:43.681 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:44.247 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.247 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.247 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.247 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.507 15:45:19 nvme -- nvme/nvme.sh@79 -- # uname 00:09:44.507 15:45:19 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:44.507 15:45:19 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:44.507 15:45:19 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:44.507 15:45:19 nvme -- common/autotest_common.sh@1078 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:44.507 15:45:19 nvme -- common/autotest_common.sh@1064 -- # _randomize_va_space=2 00:09:44.507 15:45:19 nvme -- common/autotest_common.sh@1065 -- # echo 0 00:09:44.507 15:45:19 nvme -- common/autotest_common.sh@1067 -- # stubpid=80046 00:09:44.507 Waiting for stub to ready for secondary processes... 00:09:44.507 15:45:19 nvme -- common/autotest_common.sh@1066 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:44.507 15:45:19 nvme -- common/autotest_common.sh@1068 -- # echo Waiting for stub to ready for secondary processes... 00:09:44.507 15:45:19 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:44.507 15:45:19 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/80046 ]] 00:09:44.507 15:45:19 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:09:44.507 [2024-07-20 15:45:19.180856] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
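[Editor's note] Decoding the wipefs output above: the 8 bytes 45 46 49 20 50 41 52 54 are the ASCII GPT signature "EFI PART", erased from the primary GPT header at LBA 1 (offset 0x1000 on this 4096-byte-sector device) and from the backup header near the end of the disk, while 55 aa at offset 0x1fe is the protective-MBR boot signature. The signature bytes can be checked with:

echo '45 46 49 20 50 41 52 54' | xxd -r -p && echo   # prints: EFI PART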
00:09:44.507 [2024-07-20 15:45:19.180976] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:45.442 15:45:20 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:45.442 15:45:20 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/80046 ]] 00:09:45.442 15:45:20 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:09:45.442 [2024-07-20 15:45:20.177215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:45.442 [2024-07-20 15:45:20.208311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.442 [2024-07-20 15:45:20.208454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.442 [2024-07-20 15:45:20.208611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.442 [2024-07-20 15:45:20.221514] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:45.442 [2024-07-20 15:45:20.221555] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:45.442 [2024-07-20 15:45:20.235897] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:45.442 [2024-07-20 15:45:20.236370] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:45.442 [2024-07-20 15:45:20.237136] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:45.701 [2024-07-20 15:45:20.237328] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:45.701 [2024-07-20 15:45:20.237414] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:45.701 [2024-07-20 15:45:20.238043] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:45.701 [2024-07-20 15:45:20.238385] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:45.701 [2024-07-20 15:45:20.238448] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:45.701 [2024-07-20 15:45:20.239347] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:45.701 [2024-07-20 15:45:20.239624] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:45.701 [2024-07-20 15:45:20.239718] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:45.701 [2024-07-20 15:45:20.239791] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:45.701 [2024-07-20 15:45:20.239866] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:46.636 done. 00:09:46.636 15:45:21 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:46.636 15:45:21 nvme -- common/autotest_common.sh@1074 -- # echo done. 
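[Editor's note] The "Waiting for stub to ready..." exchange above is a simple handshake: the test forks the stub with a known shared-memory id, then loops until the stub creates /var/run/spdk_stub0, checking on each pass that the process is still alive. Reconstructed from the traced conditions (the early-exit error handling is assumed):

# The stub was launched earlier as traced:
#   /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
stubpid=$!
echo 'Waiting for stub to ready for secondary processes...'
while [ ! -e /var/run/spdk_stub0 ]; do
        [[ -e /proc/$stubpid ]] || exit 1   # stub died before becoming ready
        sleep 1s
done
echo done.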
00:09:46.636 15:45:21 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:46.636 15:45:21 nvme -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:09:46.636 15:45:21 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:46.636 15:45:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:46.636 ************************************ 00:09:46.636 START TEST nvme_reset 00:09:46.636 ************************************ 00:09:46.636 15:45:21 nvme.nvme_reset -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:46.895 Initializing NVMe Controllers 00:09:46.895 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:46.895 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:46.895 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:46.895 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:46.895 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:46.895 ************************************ 00:09:46.895 END TEST nvme_reset 00:09:46.895 ************************************ 00:09:46.895 00:09:46.895 real 0m0.307s 00:09:46.895 user 0m0.106s 00:09:46.895 sys 0m0.151s 00:09:46.895 15:45:21 nvme.nvme_reset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:46.895 15:45:21 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:46.895 15:45:21 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:46.895 15:45:21 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:46.895 15:45:21 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:46.895 15:45:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:46.895 ************************************ 00:09:46.895 START TEST nvme_identify 00:09:46.895 ************************************ 00:09:46.895 15:45:21 nvme.nvme_identify -- common/autotest_common.sh@1121 -- # nvme_identify 00:09:46.895 15:45:21 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:46.895 15:45:21 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:46.895 15:45:21 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:46.895 15:45:21 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:46.895 15:45:21 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:46.895 15:45:21 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # local bdfs 00:09:46.895 15:45:21 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:46.895 15:45:21 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:46.895 15:45:21 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:09:46.895 15:45:21 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:09:46.895 15:45:21 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:46.895 15:45:21 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:47.156 [2024-07-20 15:45:21.857312] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 80079 terminated unexpected 00:09:47.156 ===================================================== 00:09:47.156 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:47.156 
===================================================== 00:09:47.156 Controller Capabilities/Features 00:09:47.156 ================================ 00:09:47.156 Vendor ID: 1b36 00:09:47.156 Subsystem Vendor ID: 1af4 00:09:47.156 Serial Number: 12340 00:09:47.156 Model Number: QEMU NVMe Ctrl 00:09:47.156 Firmware Version: 8.0.0 00:09:47.156 Recommended Arb Burst: 6 00:09:47.156 IEEE OUI Identifier: 00 54 52 00:09:47.156 Multi-path I/O 00:09:47.156 May have multiple subsystem ports: No 00:09:47.156 May have multiple controllers: No 00:09:47.156 Associated with SR-IOV VF: No 00:09:47.156 Max Data Transfer Size: 524288 00:09:47.156 Max Number of Namespaces: 256 00:09:47.156 Max Number of I/O Queues: 64 00:09:47.156 NVMe Specification Version (VS): 1.4 00:09:47.156 NVMe Specification Version (Identify): 1.4 00:09:47.156 Maximum Queue Entries: 2048 00:09:47.156 Contiguous Queues Required: Yes 00:09:47.156 Arbitration Mechanisms Supported 00:09:47.156 Weighted Round Robin: Not Supported 00:09:47.156 Vendor Specific: Not Supported 00:09:47.156 Reset Timeout: 7500 ms 00:09:47.156 Doorbell Stride: 4 bytes 00:09:47.156 NVM Subsystem Reset: Not Supported 00:09:47.156 Command Sets Supported 00:09:47.156 NVM Command Set: Supported 00:09:47.156 Boot Partition: Not Supported 00:09:47.156 Memory Page Size Minimum: 4096 bytes 00:09:47.156 Memory Page Size Maximum: 65536 bytes 00:09:47.156 Persistent Memory Region: Not Supported 00:09:47.156 Optional Asynchronous Events Supported 00:09:47.156 Namespace Attribute Notices: Supported 00:09:47.156 Firmware Activation Notices: Not Supported 00:09:47.156 ANA Change Notices: Not Supported 00:09:47.156 PLE Aggregate Log Change Notices: Not Supported 00:09:47.156 LBA Status Info Alert Notices: Not Supported 00:09:47.156 EGE Aggregate Log Change Notices: Not Supported 00:09:47.156 Normal NVM Subsystem Shutdown event: Not Supported 00:09:47.156 Zone Descriptor Change Notices: Not Supported 00:09:47.156 Discovery Log Change Notices: Not Supported 00:09:47.156 Controller Attributes 00:09:47.156 128-bit Host Identifier: Not Supported 00:09:47.156 Non-Operational Permissive Mode: Not Supported 00:09:47.156 NVM Sets: Not Supported 00:09:47.156 Read Recovery Levels: Not Supported 00:09:47.156 Endurance Groups: Not Supported 00:09:47.156 Predictable Latency Mode: Not Supported 00:09:47.156 Traffic Based Keep ALive: Not Supported 00:09:47.156 Namespace Granularity: Not Supported 00:09:47.156 SQ Associations: Not Supported 00:09:47.156 UUID List: Not Supported 00:09:47.156 Multi-Domain Subsystem: Not Supported 00:09:47.156 Fixed Capacity Management: Not Supported 00:09:47.156 Variable Capacity Management: Not Supported 00:09:47.156 Delete Endurance Group: Not Supported 00:09:47.156 Delete NVM Set: Not Supported 00:09:47.156 Extended LBA Formats Supported: Supported 00:09:47.156 Flexible Data Placement Supported: Not Supported 00:09:47.156 00:09:47.156 Controller Memory Buffer Support 00:09:47.156 ================================ 00:09:47.156 Supported: No 00:09:47.156 00:09:47.156 Persistent Memory Region Support 00:09:47.156 ================================ 00:09:47.156 Supported: No 00:09:47.156 00:09:47.156 Admin Command Set Attributes 00:09:47.156 ============================ 00:09:47.156 Security Send/Receive: Not Supported 00:09:47.156 Format NVM: Supported 00:09:47.156 Firmware Activate/Download: Not Supported 00:09:47.156 Namespace Management: Supported 00:09:47.156 Device Self-Test: Not Supported 00:09:47.156 Directives: Supported 00:09:47.156 NVMe-MI: Not Supported 
00:09:47.156 Virtualization Management: Not Supported 00:09:47.156 Doorbell Buffer Config: Supported 00:09:47.156 Get LBA Status Capability: Not Supported 00:09:47.156 Command & Feature Lockdown Capability: Not Supported 00:09:47.156 Abort Command Limit: 4 00:09:47.156 Async Event Request Limit: 4 00:09:47.156 Number of Firmware Slots: N/A 00:09:47.156 Firmware Slot 1 Read-Only: N/A 00:09:47.156 Firmware Activation Without Reset: N/A 00:09:47.156 Multiple Update Detection Support: N/A 00:09:47.156 Firmware Update Granularity: No Information Provided 00:09:47.156 Per-Namespace SMART Log: Yes 00:09:47.156 Asymmetric Namespace Access Log Page: Not Supported 00:09:47.156 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:47.156 Command Effects Log Page: Supported 00:09:47.156 Get Log Page Extended Data: Supported 00:09:47.156 Telemetry Log Pages: Not Supported 00:09:47.156 Persistent Event Log Pages: Not Supported 00:09:47.156 Supported Log Pages Log Page: May Support 00:09:47.156 Commands Supported & Effects Log Page: Not Supported 00:09:47.156 Feature Identifiers & Effects Log Page:May Support 00:09:47.156 NVMe-MI Commands & Effects Log Page: May Support 00:09:47.156 Data Area 4 for Telemetry Log: Not Supported 00:09:47.156 Error Log Page Entries Supported: 1 00:09:47.156 Keep Alive: Not Supported 00:09:47.156 00:09:47.156 NVM Command Set Attributes 00:09:47.156 ========================== 00:09:47.156 Submission Queue Entry Size 00:09:47.156 Max: 64 00:09:47.156 Min: 64 00:09:47.156 Completion Queue Entry Size 00:09:47.156 Max: 16 00:09:47.156 Min: 16 00:09:47.156 Number of Namespaces: 256 00:09:47.156 Compare Command: Supported 00:09:47.156 Write Uncorrectable Command: Not Supported 00:09:47.156 Dataset Management Command: Supported 00:09:47.156 Write Zeroes Command: Supported 00:09:47.156 Set Features Save Field: Supported 00:09:47.156 Reservations: Not Supported 00:09:47.156 Timestamp: Supported 00:09:47.156 Copy: Supported 00:09:47.156 Volatile Write Cache: Present 00:09:47.156 Atomic Write Unit (Normal): 1 00:09:47.156 Atomic Write Unit (PFail): 1 00:09:47.156 Atomic Compare & Write Unit: 1 00:09:47.156 Fused Compare & Write: Not Supported 00:09:47.156 Scatter-Gather List 00:09:47.156 SGL Command Set: Supported 00:09:47.156 SGL Keyed: Not Supported 00:09:47.156 SGL Bit Bucket Descriptor: Not Supported 00:09:47.156 SGL Metadata Pointer: Not Supported 00:09:47.156 Oversized SGL: Not Supported 00:09:47.156 SGL Metadata Address: Not Supported 00:09:47.156 SGL Offset: Not Supported 00:09:47.156 Transport SGL Data Block: Not Supported 00:09:47.157 Replay Protected Memory Block: Not Supported 00:09:47.157 00:09:47.157 Firmware Slot Information 00:09:47.157 ========================= 00:09:47.157 Active slot: 1 00:09:47.157 Slot 1 Firmware Revision: 1.0 00:09:47.157 00:09:47.157 00:09:47.157 Commands Supported and Effects 00:09:47.157 ============================== 00:09:47.157 Admin Commands 00:09:47.157 -------------- 00:09:47.157 Delete I/O Submission Queue (00h): Supported 00:09:47.157 Create I/O Submission Queue (01h): Supported 00:09:47.157 Get Log Page (02h): Supported 00:09:47.157 Delete I/O Completion Queue (04h): Supported 00:09:47.157 Create I/O Completion Queue (05h): Supported 00:09:47.157 Identify (06h): Supported 00:09:47.157 Abort (08h): Supported 00:09:47.157 Set Features (09h): Supported 00:09:47.157 Get Features (0Ah): Supported 00:09:47.157 Asynchronous Event Request (0Ch): Supported 00:09:47.157 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:47.157 Directive 
Send (19h): Supported 00:09:47.157 Directive Receive (1Ah): Supported 00:09:47.157 Virtualization Management (1Ch): Supported 00:09:47.157 Doorbell Buffer Config (7Ch): Supported 00:09:47.157 Format NVM (80h): Supported LBA-Change 00:09:47.157 I/O Commands 00:09:47.157 ------------ 00:09:47.157 Flush (00h): Supported LBA-Change 00:09:47.157 Write (01h): Supported LBA-Change 00:09:47.157 Read (02h): Supported 00:09:47.157 Compare (05h): Supported 00:09:47.157 Write Zeroes (08h): Supported LBA-Change 00:09:47.157 Dataset Management (09h): Supported LBA-Change 00:09:47.157 Unknown (0Ch): Supported 00:09:47.157 Unknown (12h): Supported 00:09:47.157 Copy (19h): Supported LBA-Change 00:09:47.157 Unknown (1Dh): Supported LBA-Change 00:09:47.157 00:09:47.157 Error Log 00:09:47.157 ========= 00:09:47.157 00:09:47.157 Arbitration 00:09:47.157 =========== 00:09:47.157 Arbitration Burst: no limit 00:09:47.157 00:09:47.157 Power Management 00:09:47.157 ================ 00:09:47.157 Number of Power States: 1 00:09:47.157 Current Power State: Power State #0 00:09:47.157 Power State #0: 00:09:47.157 Max Power: 25.00 W 00:09:47.157 Non-Operational State: Operational 00:09:47.157 Entry Latency: 16 microseconds 00:09:47.157 Exit Latency: 4 microseconds 00:09:47.157 Relative Read Throughput: 0 00:09:47.157 Relative Read Latency: 0 00:09:47.157 Relative Write Throughput: 0 00:09:47.157 Relative Write Latency: 0 00:09:47.157 Idle Power[2024-07-20 15:45:21.858833] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 80079 terminated unexpected 00:09:47.157 : Not Reported 00:09:47.157 Active Power: Not Reported 00:09:47.157 Non-Operational Permissive Mode: Not Supported 00:09:47.157 00:09:47.157 Health Information 00:09:47.157 ================== 00:09:47.157 Critical Warnings: 00:09:47.157 Available Spare Space: OK 00:09:47.157 Temperature: OK 00:09:47.157 Device Reliability: OK 00:09:47.157 Read Only: No 00:09:47.157 Volatile Memory Backup: OK 00:09:47.157 Current Temperature: 323 Kelvin (50 Celsius) 00:09:47.157 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:47.157 Available Spare: 0% 00:09:47.157 Available Spare Threshold: 0% 00:09:47.157 Life Percentage Used: 0% 00:09:47.157 Data Units Read: 1237 00:09:47.157 Data Units Written: 1065 00:09:47.157 Host Read Commands: 56867 00:09:47.157 Host Write Commands: 55310 00:09:47.157 Controller Busy Time: 0 minutes 00:09:47.157 Power Cycles: 0 00:09:47.157 Power On Hours: 0 hours 00:09:47.157 Unsafe Shutdowns: 0 00:09:47.157 Unrecoverable Media Errors: 0 00:09:47.157 Lifetime Error Log Entries: 0 00:09:47.157 Warning Temperature Time: 0 minutes 00:09:47.157 Critical Temperature Time: 0 minutes 00:09:47.157 00:09:47.157 Number of Queues 00:09:47.157 ================ 00:09:47.157 Number of I/O Submission Queues: 64 00:09:47.157 Number of I/O Completion Queues: 64 00:09:47.157 00:09:47.157 ZNS Specific Controller Data 00:09:47.157 ============================ 00:09:47.157 Zone Append Size Limit: 0 00:09:47.157 00:09:47.157 00:09:47.157 Active Namespaces 00:09:47.157 ================= 00:09:47.157 Namespace ID:1 00:09:47.157 Error Recovery Timeout: Unlimited 00:09:47.157 Command Set Identifier: NVM (00h) 00:09:47.157 Deallocate: Supported 00:09:47.157 Deallocated/Unwritten Error: Supported 00:09:47.157 Deallocated Read Value: All 0x00 00:09:47.157 Deallocate in Write Zeroes: Not Supported 00:09:47.157 Deallocated Guard Field: 0xFFFF 00:09:47.157 Flush: Supported 00:09:47.157 Reservation: Not Supported 00:09:47.157 Metadata Transferred 
as: Separate Metadata Buffer 00:09:47.157 Namespace Sharing Capabilities: Private 00:09:47.157 Size (in LBAs): 1548666 (5GiB) 00:09:47.157 Capacity (in LBAs): 1548666 (5GiB) 00:09:47.157 Utilization (in LBAs): 1548666 (5GiB) 00:09:47.157 Thin Provisioning: Not Supported 00:09:47.157 Per-NS Atomic Units: No 00:09:47.157 Maximum Single Source Range Length: 128 00:09:47.157 Maximum Copy Length: 128 00:09:47.157 Maximum Source Range Count: 128 00:09:47.157 NGUID/EUI64 Never Reused: No 00:09:47.157 Namespace Write Protected: No 00:09:47.157 Number of LBA Formats: 8 00:09:47.157 Current LBA Format: LBA Format #07 00:09:47.157 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:47.157 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:47.157 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:47.157 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:47.157 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:47.157 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:47.157 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:47.157 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:47.157 00:09:47.157 ===================================================== 00:09:47.157 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:47.157 ===================================================== 00:09:47.157 Controller Capabilities/Features 00:09:47.157 ================================ 00:09:47.157 Vendor ID: 1b36 00:09:47.157 Subsystem Vendor ID: 1af4 00:09:47.157 Serial Number: 12341 00:09:47.157 Model Number: QEMU NVMe Ctrl 00:09:47.157 Firmware Version: 8.0.0 00:09:47.157 Recommended Arb Burst: 6 00:09:47.157 IEEE OUI Identifier: 00 54 52 00:09:47.157 Multi-path I/O 00:09:47.157 May have multiple subsystem ports: No 00:09:47.157 May have multiple controllers: No 00:09:47.157 Associated with SR-IOV VF: No 00:09:47.157 Max Data Transfer Size: 524288 00:09:47.157 Max Number of Namespaces: 256 00:09:47.157 Max Number of I/O Queues: 64 00:09:47.157 NVMe Specification Version (VS): 1.4 00:09:47.157 NVMe Specification Version (Identify): 1.4 00:09:47.157 Maximum Queue Entries: 2048 00:09:47.157 Contiguous Queues Required: Yes 00:09:47.157 Arbitration Mechanisms Supported 00:09:47.157 Weighted Round Robin: Not Supported 00:09:47.157 Vendor Specific: Not Supported 00:09:47.157 Reset Timeout: 7500 ms 00:09:47.157 Doorbell Stride: 4 bytes 00:09:47.157 NVM Subsystem Reset: Not Supported 00:09:47.157 Command Sets Supported 00:09:47.157 NVM Command Set: Supported 00:09:47.158 Boot Partition: Not Supported 00:09:47.158 Memory Page Size Minimum: 4096 bytes 00:09:47.158 Memory Page Size Maximum: 65536 bytes 00:09:47.158 Persistent Memory Region: Not Supported 00:09:47.158 Optional Asynchronous Events Supported 00:09:47.158 Namespace Attribute Notices: Supported 00:09:47.158 Firmware Activation Notices: Not Supported 00:09:47.158 ANA Change Notices: Not Supported 00:09:47.158 PLE Aggregate Log Change Notices: Not Supported 00:09:47.158 LBA Status Info Alert Notices: Not Supported 00:09:47.158 EGE Aggregate Log Change Notices: Not Supported 00:09:47.158 Normal NVM Subsystem Shutdown event: Not Supported 00:09:47.158 Zone Descriptor Change Notices: Not Supported 00:09:47.158 Discovery Log Change Notices: Not Supported 00:09:47.158 Controller Attributes 00:09:47.158 128-bit Host Identifier: Not Supported 00:09:47.158 Non-Operational Permissive Mode: Not Supported 00:09:47.158 NVM Sets: Not Supported 00:09:47.158 Read Recovery Levels: Not Supported 00:09:47.158 Endurance Groups: Not Supported 00:09:47.158 
Predictable Latency Mode: Not Supported 00:09:47.158 Traffic Based Keep Alive: Not Supported 00:09:47.158 Namespace Granularity: Not Supported 00:09:47.158 SQ Associations: Not Supported 00:09:47.158 UUID List: Not Supported 00:09:47.158 Multi-Domain Subsystem: Not Supported 00:09:47.158 Fixed Capacity Management: Not Supported 00:09:47.158 Variable Capacity Management: Not Supported 00:09:47.158 Delete Endurance Group: Not Supported 00:09:47.158 Delete NVM Set: Not Supported 00:09:47.158 Extended LBA Formats Supported: Supported 00:09:47.158 Flexible Data Placement Supported: Not Supported 00:09:47.158 00:09:47.158 Controller Memory Buffer Support 00:09:47.158 ================================ 00:09:47.158 Supported: No 00:09:47.158 00:09:47.158 Persistent Memory Region Support 00:09:47.158 ================================ 00:09:47.158 Supported: No 00:09:47.158 00:09:47.158 Admin Command Set Attributes 00:09:47.158 ============================ 00:09:47.158 Security Send/Receive: Not Supported 00:09:47.158 Format NVM: Supported 00:09:47.158 Firmware Activate/Download: Not Supported 00:09:47.158 Namespace Management: Supported 00:09:47.158 Device Self-Test: Not Supported 00:09:47.158 Directives: Supported 00:09:47.158 NVMe-MI: Not Supported 00:09:47.158 Virtualization Management: Not Supported 00:09:47.158 Doorbell Buffer Config: Supported 00:09:47.158 Get LBA Status Capability: Not Supported 00:09:47.158 Command & Feature Lockdown Capability: Not Supported 00:09:47.158 Abort Command Limit: 4 00:09:47.158 Async Event Request Limit: 4 00:09:47.158 Number of Firmware Slots: N/A 00:09:47.158 Firmware Slot 1 Read-Only: N/A 00:09:47.158 Firmware Activation Without Reset: N/A 00:09:47.158 Multiple Update Detection Support: N/A 00:09:47.158 Firmware Update Granularity: No Information Provided 00:09:47.158 Per-Namespace SMART Log: Yes 00:09:47.158 Asymmetric Namespace Access Log Page: Not Supported 00:09:47.158 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:47.158 Command Effects Log Page: Supported 00:09:47.158 Get Log Page Extended Data: Supported 00:09:47.158 Telemetry Log Pages: Not Supported 00:09:47.158 Persistent Event Log Pages: Not Supported 00:09:47.158 Supported Log Pages Log Page: May Support 00:09:47.158 Commands Supported & Effects Log Page: Not Supported 00:09:47.158 Feature Identifiers & Effects Log Page: May Support 00:09:47.158 NVMe-MI Commands & Effects Log Page: May Support 00:09:47.158 Data Area 4 for Telemetry Log: Not Supported 00:09:47.158 Error Log Page Entries Supported: 1 00:09:47.158 Keep Alive: Not Supported 00:09:47.158 00:09:47.158 NVM Command Set Attributes 00:09:47.158 ========================== 00:09:47.158 Submission Queue Entry Size 00:09:47.158 Max: 64 00:09:47.158 Min: 64 00:09:47.158 Completion Queue Entry Size 00:09:47.158 Max: 16 00:09:47.158 Min: 16 00:09:47.158 Number of Namespaces: 256 00:09:47.158 Compare Command: Supported 00:09:47.158 Write Uncorrectable Command: Not Supported 00:09:47.158 Dataset Management Command: Supported 00:09:47.158 Write Zeroes Command: Supported 00:09:47.158 Set Features Save Field: Supported 00:09:47.158 Reservations: Not Supported 00:09:47.158 Timestamp: Supported 00:09:47.158 Copy: Supported 00:09:47.158 Volatile Write Cache: Present 00:09:47.158 Atomic Write Unit (Normal): 1 00:09:47.158 Atomic Write Unit (PFail): 1 00:09:47.158 Atomic Compare & Write Unit: 1 00:09:47.158 Fused Compare & Write: Not Supported 00:09:47.158 Scatter-Gather List 00:09:47.158 SGL Command Set: Supported 00:09:47.158 SGL Keyed: Not Supported 
00:09:47.158 SGL Bit Bucket Descriptor: Not Supported 00:09:47.158 SGL Metadata Pointer: Not Supported 00:09:47.158 Oversized SGL: Not Supported 00:09:47.158 SGL Metadata Address: Not Supported 00:09:47.158 SGL Offset: Not Supported 00:09:47.158 Transport SGL Data Block: Not Supported 00:09:47.158 Replay Protected Memory Block: Not Supported 00:09:47.158 00:09:47.158 Firmware Slot Information 00:09:47.158 ========================= 00:09:47.158 Active slot: 1 00:09:47.158 Slot 1 Firmware Revision: 1.0 00:09:47.158 00:09:47.158 00:09:47.158 Commands Supported and Effects 00:09:47.158 ============================== 00:09:47.158 Admin Commands 00:09:47.158 -------------- 00:09:47.158 Delete I/O Submission Queue (00h): Supported 00:09:47.158 Create I/O Submission Queue (01h): Supported 00:09:47.158 Get Log Page (02h): Supported 00:09:47.158 Delete I/O Completion Queue (04h): Supported 00:09:47.158 Create I/O Completion Queue (05h): Supported 00:09:47.158 Identify (06h): Supported 00:09:47.158 Abort (08h): Supported 00:09:47.158 Set Features (09h): Supported 00:09:47.158 Get Features (0Ah): Supported 00:09:47.158 Asynchronous Event Request (0Ch): Supported 00:09:47.158 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:47.158 Directive Send (19h): Supported 00:09:47.158 Directive Receive (1Ah): Supported 00:09:47.158 Virtualization Management (1Ch): Supported 00:09:47.158 Doorbell Buffer Config (7Ch): Supported 00:09:47.158 Format NVM (80h): Supported LBA-Change 00:09:47.158 I/O Commands 00:09:47.158 ------------ 00:09:47.158 Flush (00h): Supported LBA-Change 00:09:47.158 Write (01h): Supported LBA-Change 00:09:47.158 Read (02h): Supported 00:09:47.158 Compare (05h): Supported 00:09:47.158 Write Zeroes (08h): Supported LBA-Change 00:09:47.158 Dataset Management (09h): Supported LBA-Change 00:09:47.158 Unknown (0Ch): Supported 00:09:47.158 Unknown (12h): Supported 00:09:47.158 Copy (19h): Supported LBA-Change 00:09:47.158 Unknown (1Dh): Supported LBA-Change 00:09:47.158 00:09:47.158 Error Log 00:09:47.158 ========= 00:09:47.158 00:09:47.158 Arbitration 00:09:47.158 =========== 00:09:47.158 Arbitration Burst: no limit 00:09:47.158 00:09:47.158 Power Management 00:09:47.158 ================ 00:09:47.158 Number of Power States: 1 00:09:47.158 Current Power State: Power State #0 00:09:47.158 Power State #0: 00:09:47.159 Max Power: 25.00 W 00:09:47.159 Non-Operational State: Operational 00:09:47.159 Entry Latency: 16 microseconds 00:09:47.159 Exit Latency: 4 microseconds 00:09:47.159 Relative Read Throughput: 0 00:09:47.159 Relative Read Latency: 0 00:09:47.159 Relative Write Throughput: 0 00:09:47.159 Relative Write Latency: 0 00:09:47.159 Idle Power: Not Reported 00:09:47.159 Active Power: Not Reported 00:09:47.159 Non-Operational Permissive Mode: Not Supported 00:09:47.159 00:09:47.159 Health Information 00:09:47.159 ================== 00:09:47.159 Critical Warnings: 00:09:47.159 Available Spare Space: OK 00:09:47.159 Temperature: OK 00:09:47.159 Device Reliability: OK 00:09:47.159 Read Only: No 00:09:47.159 Volatile Memory Backup: OK 00:09:47.159 Current Temperature: 323 Kelvin (50 Celsius) 00:09:47.159 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:47.159 Available Spare: 0% 00:09:47.159 Available Spare Threshold: 0% 00:09:47.159 Life Percentage Used: 0% 00:09:47.159 Data Units Read: 920 00:09:47.159 Data Units Written: 768 00:09:47.159 Host Read Commands: 40604 00:09:47.159 Host Write Commands: 38333 00:09:47.159 Controller Busy Time: 0 minutes 00:09:47.159 Power Cycles: 0 
00:09:47.159 Power On Hours: 0 hours 00:09:47.159 Unsafe Shutdowns: 0 00:09:47.159 Unrecoverable Media Errors: 0 00:09:47.159 Lifetime Error Log Entries: 0 00:09:47.159 Warning Temperature Time: 0 minutes 00:09:47.159 Critical Temperature Time: 0 minutes 00:09:47.159 00:09:47.159 Number of Queues 00:09:47.159 ================ 00:09:47.159 Number of I/O Submission Queues: 64 00:09:47.159 Number of I/O Completion Queues: 64 00:09:47.159 00:09:47.159 ZNS Specific Controller Data 00:09:47.159 ============================ 00:09:47.159 Zone Append Size Limit: 0 00:09:47.159 00:09:47.159 00:09:47.159 Active Namespaces 00:09:47.159 ================= 00:09:47.159 Namespace ID:1 00:09:47.159 Error Recovery Timeout: Unlimited 00:09:47.159 [2024-07-20 15:45:21.859861] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 80079 terminated unexpected 00:09:47.159 Command Set Identifier: NVM (00h) 00:09:47.159 Deallocate: Supported 00:09:47.159 Deallocated/Unwritten Error: Supported 00:09:47.159 Deallocated Read Value: All 0x00 00:09:47.159 Deallocate in Write Zeroes: Not Supported 00:09:47.159 Deallocated Guard Field: 0xFFFF 00:09:47.159 Flush: Supported 00:09:47.159 Reservation: Not Supported 00:09:47.159 Namespace Sharing Capabilities: Private 00:09:47.159 Size (in LBAs): 1310720 (5GiB) 00:09:47.159 Capacity (in LBAs): 1310720 (5GiB) 00:09:47.159 Utilization (in LBAs): 1310720 (5GiB) 00:09:47.159 Thin Provisioning: Not Supported 00:09:47.159 Per-NS Atomic Units: No 00:09:47.159 Maximum Single Source Range Length: 128 00:09:47.159 Maximum Copy Length: 128 00:09:47.159 Maximum Source Range Count: 128 00:09:47.159 NGUID/EUI64 Never Reused: No 00:09:47.159 Namespace Write Protected: No 00:09:47.159 Number of LBA Formats: 8 00:09:47.159 Current LBA Format: LBA Format #04 00:09:47.159 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:47.159 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:47.159 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:47.159 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:47.159 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:47.159 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:47.159 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:47.159 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:47.159 00:09:47.159 ===================================================== 00:09:47.159 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:47.159 ===================================================== 00:09:47.159 Controller Capabilities/Features 00:09:47.159 ================================ 00:09:47.159 Vendor ID: 1b36 00:09:47.159 Subsystem Vendor ID: 1af4 00:09:47.159 Serial Number: 12343 00:09:47.159 Model Number: QEMU NVMe Ctrl 00:09:47.159 Firmware Version: 8.0.0 00:09:47.159 Recommended Arb Burst: 6 00:09:47.159 IEEE OUI Identifier: 00 54 52 00:09:47.159 Multi-path I/O 00:09:47.159 May have multiple subsystem ports: No 00:09:47.159 May have multiple controllers: Yes 00:09:47.159 Associated with SR-IOV VF: No 00:09:47.159 Max Data Transfer Size: 524288 00:09:47.159 Max Number of Namespaces: 256 00:09:47.159 Max Number of I/O Queues: 64 00:09:47.159 NVMe Specification Version (VS): 1.4 00:09:47.159 NVMe Specification Version (Identify): 1.4 00:09:47.159 Maximum Queue Entries: 2048 00:09:47.159 Contiguous Queues Required: Yes 00:09:47.159 Arbitration Mechanisms Supported 00:09:47.159 Weighted Round Robin: Not Supported 00:09:47.159 Vendor Specific: Not Supported 00:09:47.159 Reset Timeout: 7500 ms 00:09:47.159 
Doorbell Stride: 4 bytes 00:09:47.159 NVM Subsystem Reset: Not Supported 00:09:47.159 Command Sets Supported 00:09:47.159 NVM Command Set: Supported 00:09:47.159 Boot Partition: Not Supported 00:09:47.159 Memory Page Size Minimum: 4096 bytes 00:09:47.159 Memory Page Size Maximum: 65536 bytes 00:09:47.159 Persistent Memory Region: Not Supported 00:09:47.159 Optional Asynchronous Events Supported 00:09:47.159 Namespace Attribute Notices: Supported 00:09:47.159 Firmware Activation Notices: Not Supported 00:09:47.159 ANA Change Notices: Not Supported 00:09:47.159 PLE Aggregate Log Change Notices: Not Supported 00:09:47.159 LBA Status Info Alert Notices: Not Supported 00:09:47.159 EGE Aggregate Log Change Notices: Not Supported 00:09:47.159 Normal NVM Subsystem Shutdown event: Not Supported 00:09:47.159 Zone Descriptor Change Notices: Not Supported 00:09:47.159 Discovery Log Change Notices: Not Supported 00:09:47.159 Controller Attributes 00:09:47.159 128-bit Host Identifier: Not Supported 00:09:47.159 Non-Operational Permissive Mode: Not Supported 00:09:47.159 NVM Sets: Not Supported 00:09:47.159 Read Recovery Levels: Not Supported 00:09:47.159 Endurance Groups: Supported 00:09:47.159 Predictable Latency Mode: Not Supported 00:09:47.159 Traffic Based Keep Alive: Not Supported 00:09:47.159 Namespace Granularity: Not Supported 00:09:47.159 SQ Associations: Not Supported 00:09:47.159 UUID List: Not Supported 00:09:47.159 Multi-Domain Subsystem: Not Supported 00:09:47.159 Fixed Capacity Management: Not Supported 00:09:47.159 Variable Capacity Management: Not Supported 00:09:47.159 Delete Endurance Group: Not Supported 00:09:47.159 Delete NVM Set: Not Supported 00:09:47.159 Extended LBA Formats Supported: Supported 00:09:47.159 Flexible Data Placement Supported: Supported 00:09:47.159 00:09:47.159 Controller Memory Buffer Support 00:09:47.159 ================================ 00:09:47.159 Supported: No 00:09:47.159 00:09:47.159 Persistent Memory Region Support 00:09:47.159 ================================ 00:09:47.160 Supported: No 00:09:47.160 00:09:47.160 Admin Command Set Attributes 00:09:47.160 ============================ 00:09:47.160 Security Send/Receive: Not Supported 00:09:47.160 Format NVM: Supported 00:09:47.160 Firmware Activate/Download: Not Supported 00:09:47.160 Namespace Management: Supported 00:09:47.160 Device Self-Test: Not Supported 00:09:47.160 Directives: Supported 00:09:47.160 NVMe-MI: Not Supported 00:09:47.160 Virtualization Management: Not Supported 00:09:47.160 Doorbell Buffer Config: Supported 00:09:47.160 Get LBA Status Capability: Not Supported 00:09:47.160 Command & Feature Lockdown Capability: Not Supported 00:09:47.160 Abort Command Limit: 4 00:09:47.160 Async Event Request Limit: 4 00:09:47.160 Number of Firmware Slots: N/A 00:09:47.160 Firmware Slot 1 Read-Only: N/A 00:09:47.160 Firmware Activation Without Reset: N/A 00:09:47.160 Multiple Update Detection Support: N/A 00:09:47.160 Firmware Update Granularity: No Information Provided 00:09:47.160 Per-Namespace SMART Log: Yes 00:09:47.160 Asymmetric Namespace Access Log Page: Not Supported 00:09:47.160 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:47.160 Command Effects Log Page: Supported 00:09:47.160 Get Log Page Extended Data: Supported 00:09:47.160 Telemetry Log Pages: Not Supported 00:09:47.160 Persistent Event Log Pages: Not Supported 00:09:47.160 Supported Log Pages Log Page: May Support 00:09:47.160 Commands Supported & Effects Log Page: Not Supported 00:09:47.160 Feature Identifiers & Effects Log 
Page: May Support 00:09:47.160 NVMe-MI Commands & Effects Log Page: May Support 00:09:47.160 Data Area 4 for Telemetry Log: Not Supported 00:09:47.160 Error Log Page Entries Supported: 1 00:09:47.160 Keep Alive: Not Supported 00:09:47.160 00:09:47.160 NVM Command Set Attributes 00:09:47.160 ========================== 00:09:47.160 Submission Queue Entry Size 00:09:47.160 Max: 64 00:09:47.160 Min: 64 00:09:47.160 Completion Queue Entry Size 00:09:47.160 Max: 16 00:09:47.160 Min: 16 00:09:47.160 Number of Namespaces: 256 00:09:47.160 Compare Command: Supported 00:09:47.160 Write Uncorrectable Command: Not Supported 00:09:47.160 Dataset Management Command: Supported 00:09:47.160 Write Zeroes Command: Supported 00:09:47.160 Set Features Save Field: Supported 00:09:47.160 Reservations: Not Supported 00:09:47.160 Timestamp: Supported 00:09:47.160 Copy: Supported 00:09:47.160 Volatile Write Cache: Present 00:09:47.160 Atomic Write Unit (Normal): 1 00:09:47.160 Atomic Write Unit (PFail): 1 00:09:47.160 Atomic Compare & Write Unit: 1 00:09:47.160 Fused Compare & Write: Not Supported 00:09:47.160 Scatter-Gather List 00:09:47.160 SGL Command Set: Supported 00:09:47.160 SGL Keyed: Not Supported 00:09:47.160 SGL Bit Bucket Descriptor: Not Supported 00:09:47.160 SGL Metadata Pointer: Not Supported 00:09:47.160 Oversized SGL: Not Supported 00:09:47.160 SGL Metadata Address: Not Supported 00:09:47.160 SGL Offset: Not Supported 00:09:47.160 Transport SGL Data Block: Not Supported 00:09:47.160 Replay Protected Memory Block: Not Supported 00:09:47.160 00:09:47.160 Firmware Slot Information 00:09:47.160 ========================= 00:09:47.160 Active slot: 1 00:09:47.160 Slot 1 Firmware Revision: 1.0 00:09:47.160 00:09:47.160 00:09:47.160 Commands Supported and Effects 00:09:47.160 ============================== 00:09:47.160 Admin Commands 00:09:47.160 -------------- 00:09:47.160 Delete I/O Submission Queue (00h): Supported 00:09:47.160 Create I/O Submission Queue (01h): Supported 00:09:47.160 Get Log Page (02h): Supported 00:09:47.160 Delete I/O Completion Queue (04h): Supported 00:09:47.160 Create I/O Completion Queue (05h): Supported 00:09:47.160 Identify (06h): Supported 00:09:47.160 Abort (08h): Supported 00:09:47.160 Set Features (09h): Supported 00:09:47.160 Get Features (0Ah): Supported 00:09:47.160 Asynchronous Event Request (0Ch): Supported 00:09:47.160 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:47.160 Directive Send (19h): Supported 00:09:47.160 Directive Receive (1Ah): Supported 00:09:47.160 Virtualization Management (1Ch): Supported 00:09:47.160 Doorbell Buffer Config (7Ch): Supported 00:09:47.160 Format NVM (80h): Supported LBA-Change 00:09:47.160 I/O Commands 00:09:47.160 ------------ 00:09:47.160 Flush (00h): Supported LBA-Change 00:09:47.160 Write (01h): Supported LBA-Change 00:09:47.160 Read (02h): Supported 00:09:47.160 Compare (05h): Supported 00:09:47.160 Write Zeroes (08h): Supported LBA-Change 00:09:47.160 Dataset Management (09h): Supported LBA-Change 00:09:47.160 Unknown (0Ch): Supported 00:09:47.160 Unknown (12h): Supported 00:09:47.160 Copy (19h): Supported LBA-Change 00:09:47.160 Unknown (1Dh): Supported LBA-Change 00:09:47.160 00:09:47.160 Error Log 00:09:47.160 ========= 00:09:47.160 00:09:47.160 Arbitration 00:09:47.160 =========== 00:09:47.160 Arbitration Burst: no limit 00:09:47.160 00:09:47.160 Power Management 00:09:47.160 ================ 00:09:47.160 Number of Power States: 1 00:09:47.160 Current Power State: Power State #0 00:09:47.160 Power State #0: 
00:09:47.160 Max Power: 25.00 W 00:09:47.160 Non-Operational State: Operational 00:09:47.160 Entry Latency: 16 microseconds 00:09:47.160 Exit Latency: 4 microseconds 00:09:47.160 Relative Read Throughput: 0 00:09:47.160 Relative Read Latency: 0 00:09:47.160 Relative Write Throughput: 0 00:09:47.160 Relative Write Latency: 0 00:09:47.160 Idle Power: Not Reported 00:09:47.160 Active Power: Not Reported 00:09:47.160 Non-Operational Permissive Mode: Not Supported 00:09:47.160 00:09:47.160 Health Information 00:09:47.160 ================== 00:09:47.160 Critical Warnings: 00:09:47.160 Available Spare Space: OK 00:09:47.160 Temperature: OK 00:09:47.160 Device Reliability: OK 00:09:47.160 Read Only: No 00:09:47.160 Volatile Memory Backup: OK 00:09:47.160 Current Temperature: 323 Kelvin (50 Celsius) 00:09:47.160 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:47.160 Available Spare: 0% 00:09:47.160 Available Spare Threshold: 0% 00:09:47.160 Life Percentage Used: 0% 00:09:47.160 Data Units Read: 916 00:09:47.160 Data Units Written: 810 00:09:47.160 Host Read Commands: 40252 00:09:47.160 Host Write Commands: 38842 00:09:47.160 Controller Busy Time: 0 minutes 00:09:47.160 Power Cycles: 0 00:09:47.160 Power On Hours: 0 hours 00:09:47.160 Unsafe Shutdowns: 0 00:09:47.160 Unrecoverable Media Errors: 0 00:09:47.160 Lifetime Error Log Entries: 0 00:09:47.160 Warning Temperature Time: 0 minutes 00:09:47.160 Critical Temperature Time: 0 minutes 00:09:47.160 00:09:47.160 Number of Queues 00:09:47.160 ================ 00:09:47.160 Number of I/O Submission Queues: 64 00:09:47.160 Number of I/O Completion Queues: 64 00:09:47.160 00:09:47.160 ZNS Specific Controller Data 00:09:47.160 ============================ 00:09:47.161 Zone Append Size Limit: 0 00:09:47.161 00:09:47.161 00:09:47.161 Active Namespaces 00:09:47.161 ================= 00:09:47.161 Namespace ID:1 00:09:47.161 Error Recovery Timeout: Unlimited 00:09:47.161 Command Set Identifier: NVM (00h) 00:09:47.161 Deallocate: Supported 00:09:47.161 Deallocated/Unwritten Error: Supported 00:09:47.161 Deallocated Read Value: All 0x00 00:09:47.161 Deallocate in Write Zeroes: Not Supported 00:09:47.161 Deallocated Guard Field: 0xFFFF 00:09:47.161 Flush: Supported 00:09:47.161 Reservation: Not Supported 00:09:47.161 Namespace Sharing Capabilities: Multiple Controllers 00:09:47.161 Size (in LBAs): 262144 (1GiB) 00:09:47.161 Capacity (in LBAs): 262144 (1GiB) 00:09:47.161 Utilization (in LBAs): 262144 (1GiB) 00:09:47.161 Thin Provisioning: Not Supported 00:09:47.161 Per-NS Atomic Units: No 00:09:47.161 Maximum Single Source Range Length: 128 00:09:47.161 Maximum Copy Length: 128 00:09:47.161 Maximum Source Range Count: 128 00:09:47.161 NGUID/EUI64 Never Reused: No 00:09:47.161 Namespace Write Protected: No 00:09:47.161 Endurance group ID: 1 00:09:47.161 Number of LBA Formats: 8 00:09:47.161 Current LBA Format: LBA Format #04 00:09:47.161 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:47.161 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:47.161 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:47.161 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:47.161 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:47.161 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:47.161 [2024-07-20 15:45:21.861613] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 80079 terminated unexpected 00:09:47.161 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:47.161 LBA Format #07: Data Size: 4096 Metadata Size: 64 
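The figures in these dumps are internally consistent: multiplying a namespace's reported LBA count by the data size of its current LBA format reproduces the advertised capacity, and the temperatures are plain Kelvin-minus-273 conversions. A minimal bash sketch of both checks, using values taken from the dump above (the variable names are illustrative only):

  lba_count=262144      # Size (in LBAs) of the 1GiB namespace on 0000:00:13.0
  lba_data_size=4096    # Current LBA Format #04: Data Size: 4096
  bytes=$((lba_count * lba_data_size))
  echo "$bytes bytes = $((bytes >> 30)) GiB"    # 1073741824 bytes = 1 GiB
  temp_k=323            # Current Temperature: 323 Kelvin
  echo "$((temp_k - 273)) Celsius"              # 50 Celsius, matching the dump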
00:09:47.161 00:09:47.161 Get Feature FDP: 00:09:47.161 ================ 00:09:47.161 Enabled: Yes 00:09:47.161 FDP configuration index: 0 00:09:47.161 00:09:47.161 FDP configurations log page 00:09:47.161 =========================== 00:09:47.161 Number of FDP configurations: 1 00:09:47.161 Version: 0 00:09:47.161 Size: 112 00:09:47.161 FDP Configuration Descriptor: 0 00:09:47.161 Descriptor Size: 96 00:09:47.161 Reclaim Group Identifier format: 2 00:09:47.161 FDP Volatile Write Cache: Not Present 00:09:47.161 FDP Configuration: Valid 00:09:47.161 Vendor Specific Size: 0 00:09:47.161 Number of Reclaim Groups: 2 00:09:47.161 Number of Reclaim Unit Handles: 8 00:09:47.161 Max Placement Identifiers: 128 00:09:47.161 Number of Namespaces Supported: 256 00:09:47.161 Reclaim Unit Nominal Size: 6000000 bytes 00:09:47.161 Estimated Reclaim Unit Time Limit: Not Reported 00:09:47.161 RUH Desc #000: RUH Type: Initially Isolated 00:09:47.161 RUH Desc #001: RUH Type: Initially Isolated 00:09:47.161 RUH Desc #002: RUH Type: Initially Isolated 00:09:47.161 RUH Desc #003: RUH Type: Initially Isolated 00:09:47.161 RUH Desc #004: RUH Type: Initially Isolated 00:09:47.161 RUH Desc #005: RUH Type: Initially Isolated 00:09:47.161 RUH Desc #006: RUH Type: Initially Isolated 00:09:47.161 RUH Desc #007: RUH Type: Initially Isolated 00:09:47.161 00:09:47.161 FDP reclaim unit handle usage log page 00:09:47.161 ====================================== 00:09:47.161 Number of Reclaim Unit Handles: 8 00:09:47.161 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:47.161 RUH Usage Desc #001: RUH Attributes: Unused 00:09:47.161 RUH Usage Desc #002: RUH Attributes: Unused 00:09:47.161 RUH Usage Desc #003: RUH Attributes: Unused 00:09:47.161 RUH Usage Desc #004: RUH Attributes: Unused 00:09:47.161 RUH Usage Desc #005: RUH Attributes: Unused 00:09:47.161 RUH Usage Desc #006: RUH Attributes: Unused 00:09:47.161 RUH Usage Desc #007: RUH Attributes: Unused 00:09:47.161 00:09:47.161 FDP statistics log page 00:09:47.161 ======================= 00:09:47.161 Host bytes with metadata written: 528719872 00:09:47.161 Media bytes with metadata written: 528777216 00:09:47.161 Media bytes erased: 0 00:09:47.161 00:09:47.161 FDP events log page 00:09:47.161 =================== 00:09:47.161 Number of FDP events: 0 00:09:47.161 00:09:47.161 ===================================================== 00:09:47.161 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:47.161 ===================================================== 00:09:47.161 Controller Capabilities/Features 00:09:47.161 ================================ 00:09:47.161 Vendor ID: 1b36 00:09:47.161 Subsystem Vendor ID: 1af4 00:09:47.161 Serial Number: 12342 00:09:47.161 Model Number: QEMU NVMe Ctrl 00:09:47.161 Firmware Version: 8.0.0 00:09:47.161 Recommended Arb Burst: 6 00:09:47.161 IEEE OUI Identifier: 00 54 52 00:09:47.161 Multi-path I/O 00:09:47.161 May have multiple subsystem ports: No 00:09:47.161 May have multiple controllers: No 00:09:47.161 Associated with SR-IOV VF: No 00:09:47.161 Max Data Transfer Size: 524288 00:09:47.161 Max Number of Namespaces: 256 00:09:47.161 Max Number of I/O Queues: 64 00:09:47.161 NVMe Specification Version (VS): 1.4 00:09:47.161 NVMe Specification Version (Identify): 1.4 00:09:47.161 Maximum Queue Entries: 2048 00:09:47.161 Contiguous Queues Required: Yes 00:09:47.161 Arbitration Mechanisms Supported 00:09:47.161 Weighted Round Robin: Not Supported 00:09:47.161 Vendor Specific: Not Supported 00:09:47.161 Reset Timeout: 7500 ms 00:09:47.161 
Doorbell Stride: 4 bytes 00:09:47.161 NVM Subsystem Reset: Not Supported 00:09:47.161 Command Sets Supported 00:09:47.161 NVM Command Set: Supported 00:09:47.161 Boot Partition: Not Supported 00:09:47.161 Memory Page Size Minimum: 4096 bytes 00:09:47.161 Memory Page Size Maximum: 65536 bytes 00:09:47.161 Persistent Memory Region: Not Supported 00:09:47.161 Optional Asynchronous Events Supported 00:09:47.161 Namespace Attribute Notices: Supported 00:09:47.161 Firmware Activation Notices: Not Supported 00:09:47.161 ANA Change Notices: Not Supported 00:09:47.161 PLE Aggregate Log Change Notices: Not Supported 00:09:47.161 LBA Status Info Alert Notices: Not Supported 00:09:47.161 EGE Aggregate Log Change Notices: Not Supported 00:09:47.161 Normal NVM Subsystem Shutdown event: Not Supported 00:09:47.161 Zone Descriptor Change Notices: Not Supported 00:09:47.161 Discovery Log Change Notices: Not Supported 00:09:47.161 Controller Attributes 00:09:47.161 128-bit Host Identifier: Not Supported 00:09:47.161 Non-Operational Permissive Mode: Not Supported 00:09:47.161 NVM Sets: Not Supported 00:09:47.161 Read Recovery Levels: Not Supported 00:09:47.161 Endurance Groups: Not Supported 00:09:47.161 Predictable Latency Mode: Not Supported 00:09:47.161 Traffic Based Keep Alive: Not Supported 00:09:47.161 Namespace Granularity: Not Supported 00:09:47.161 SQ Associations: Not Supported 00:09:47.161 UUID List: Not Supported 00:09:47.161 Multi-Domain Subsystem: Not Supported 00:09:47.161 Fixed Capacity Management: Not Supported 00:09:47.161 Variable Capacity Management: Not Supported 00:09:47.161 Delete Endurance Group: Not Supported 00:09:47.161 Delete NVM Set: Not Supported 00:09:47.161 Extended LBA Formats Supported: Supported 00:09:47.161 Flexible Data Placement Supported: Not Supported 00:09:47.161 00:09:47.161 Controller Memory Buffer Support 00:09:47.161 ================================ 00:09:47.161 Supported: No 00:09:47.161 00:09:47.162 Persistent Memory Region Support 00:09:47.162 ================================ 00:09:47.162 Supported: No 00:09:47.162 00:09:47.162 Admin Command Set Attributes 00:09:47.162 ============================ 00:09:47.162 Security Send/Receive: Not Supported 00:09:47.162 Format NVM: Supported 00:09:47.162 Firmware Activate/Download: Not Supported 00:09:47.162 Namespace Management: Supported 00:09:47.162 Device Self-Test: Not Supported 00:09:47.162 Directives: Supported 00:09:47.162 NVMe-MI: Not Supported 00:09:47.162 Virtualization Management: Not Supported 00:09:47.162 Doorbell Buffer Config: Supported 00:09:47.162 Get LBA Status Capability: Not Supported 00:09:47.162 Command & Feature Lockdown Capability: Not Supported 00:09:47.162 Abort Command Limit: 4 00:09:47.162 Async Event Request Limit: 4 00:09:47.162 Number of Firmware Slots: N/A 00:09:47.162 Firmware Slot 1 Read-Only: N/A 00:09:47.163 Firmware Activation Without Reset: N/A 00:09:47.163 Multiple Update Detection Support: N/A 00:09:47.163 Firmware Update Granularity: No Information Provided 00:09:47.163 Per-Namespace SMART Log: Yes 00:09:47.163 Asymmetric Namespace Access Log Page: Not Supported 00:09:47.163 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:47.163 Command Effects Log Page: Supported 00:09:47.163 Get Log Page Extended Data: Supported 00:09:47.163 Telemetry Log Pages: Not Supported 00:09:47.163 Persistent Event Log Pages: Not Supported 00:09:47.163 Supported Log Pages Log Page: May Support 00:09:47.163 Commands Supported & Effects Log Page: Not Supported 00:09:47.163 Feature Identifiers & Effects Log 
Page: May Support 00:09:47.163 NVMe-MI Commands & Effects Log Page: May Support 00:09:47.163 Data Area 4 for Telemetry Log: Not Supported 00:09:47.163 Error Log Page Entries Supported: 1 00:09:47.163 Keep Alive: Not Supported 00:09:47.163 00:09:47.163 NVM Command Set Attributes 00:09:47.163 ========================== 00:09:47.163 Submission Queue Entry Size 00:09:47.163 Max: 64 00:09:47.163 Min: 64 00:09:47.163 Completion Queue Entry Size 00:09:47.163 Max: 16 00:09:47.163 Min: 16 00:09:47.163 Number of Namespaces: 256 00:09:47.163 Compare Command: Supported 00:09:47.163 Write Uncorrectable Command: Not Supported 00:09:47.163 Dataset Management Command: Supported 00:09:47.163 Write Zeroes Command: Supported 00:09:47.163 Set Features Save Field: Supported 00:09:47.163 Reservations: Not Supported 00:09:47.163 Timestamp: Supported 00:09:47.163 Copy: Supported 00:09:47.163 Volatile Write Cache: Present 00:09:47.163 Atomic Write Unit (Normal): 1 00:09:47.163 Atomic Write Unit (PFail): 1 00:09:47.163 Atomic Compare & Write Unit: 1 00:09:47.163 Fused Compare & Write: Not Supported 00:09:47.163 Scatter-Gather List 00:09:47.163 SGL Command Set: Supported 00:09:47.163 SGL Keyed: Not Supported 00:09:47.163 SGL Bit Bucket Descriptor: Not Supported 00:09:47.163 SGL Metadata Pointer: Not Supported 00:09:47.163 Oversized SGL: Not Supported 00:09:47.163 SGL Metadata Address: Not Supported 00:09:47.163 SGL Offset: Not Supported 00:09:47.163 Transport SGL Data Block: Not Supported 00:09:47.163 Replay Protected Memory Block: Not Supported 00:09:47.163 00:09:47.163 Firmware Slot Information 00:09:47.163 ========================= 00:09:47.163 Active slot: 1 00:09:47.163 Slot 1 Firmware Revision: 1.0 00:09:47.163 00:09:47.163 00:09:47.163 Commands Supported and Effects 00:09:47.163 ============================== 00:09:47.163 Admin Commands 00:09:47.163 -------------- 00:09:47.163 Delete I/O Submission Queue (00h): Supported 00:09:47.163 Create I/O Submission Queue (01h): Supported 00:09:47.163 Get Log Page (02h): Supported 00:09:47.163 Delete I/O Completion Queue (04h): Supported 00:09:47.163 Create I/O Completion Queue (05h): Supported 00:09:47.163 Identify (06h): Supported 00:09:47.163 Abort (08h): Supported 00:09:47.163 Set Features (09h): Supported 00:09:47.163 Get Features (0Ah): Supported 00:09:47.163 Asynchronous Event Request (0Ch): Supported 00:09:47.163 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:47.163 Directive Send (19h): Supported 00:09:47.163 Directive Receive (1Ah): Supported 00:09:47.163 Virtualization Management (1Ch): Supported 00:09:47.163 Doorbell Buffer Config (7Ch): Supported 00:09:47.163 Format NVM (80h): Supported LBA-Change 00:09:47.163 I/O Commands 00:09:47.163 ------------ 00:09:47.163 Flush (00h): Supported LBA-Change 00:09:47.163 Write (01h): Supported LBA-Change 00:09:47.163 Read (02h): Supported 00:09:47.163 Compare (05h): Supported 00:09:47.163 Write Zeroes (08h): Supported LBA-Change 00:09:47.163 Dataset Management (09h): Supported LBA-Change 00:09:47.163 Unknown (0Ch): Supported 00:09:47.163 Unknown (12h): Supported 00:09:47.163 Copy (19h): Supported LBA-Change 00:09:47.163 Unknown (1Dh): Supported LBA-Change 00:09:47.163 00:09:47.163 Error Log 00:09:47.163 ========= 00:09:47.163 00:09:47.163 Arbitration 00:09:47.163 =========== 00:09:47.163 Arbitration Burst: no limit 00:09:47.163 00:09:47.163 Power Management 00:09:47.163 ================ 00:09:47.163 Number of Power States: 1 00:09:47.163 Current Power State: Power State #0 00:09:47.163 Power State #0: 
00:09:47.163 Max Power: 25.00 W 00:09:47.163 Non-Operational State: Operational 00:09:47.163 Entry Latency: 16 microseconds 00:09:47.163 Exit Latency: 4 microseconds 00:09:47.163 Relative Read Throughput: 0 00:09:47.163 Relative Read Latency: 0 00:09:47.163 Relative Write Throughput: 0 00:09:47.163 Relative Write Latency: 0 00:09:47.163 Idle Power: Not Reported 00:09:47.163 Active Power: Not Reported 00:09:47.164 Non-Operational Permissive Mode: Not Supported 00:09:47.164 00:09:47.164 Health Information 00:09:47.164 ================== 00:09:47.164 Critical Warnings: 00:09:47.164 Available Spare Space: OK 00:09:47.164 Temperature: OK 00:09:47.164 Device Reliability: OK 00:09:47.164 Read Only: No 00:09:47.164 Volatile Memory Backup: OK 00:09:47.164 Current Temperature: 323 Kelvin (50 Celsius) 00:09:47.164 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:47.164 Available Spare: 0% 00:09:47.164 Available Spare Threshold: 0% 00:09:47.164 Life Percentage Used: 0% 00:09:47.164 Data Units Read: 2599 00:09:47.164 Data Units Written: 2280 00:09:47.164 Host Read Commands: 119051 00:09:47.164 Host Write Commands: 114821 00:09:47.164 Controller Busy Time: 0 minutes 00:09:47.164 Power Cycles: 0 00:09:47.164 Power On Hours: 0 hours 00:09:47.164 Unsafe Shutdowns: 0 00:09:47.164 Unrecoverable Media Errors: 0 00:09:47.164 Lifetime Error Log Entries: 0 00:09:47.164 Warning Temperature Time: 0 minutes 00:09:47.164 Critical Temperature Time: 0 minutes 00:09:47.164 00:09:47.164 Number of Queues 00:09:47.164 ================ 00:09:47.164 Number of I/O Submission Queues: 64 00:09:47.164 Number of I/O Completion Queues: 64 00:09:47.164 00:09:47.164 ZNS Specific Controller Data 00:09:47.164 ============================ 00:09:47.164 Zone Append Size Limit: 0 00:09:47.164 00:09:47.164 00:09:47.164 Active Namespaces 00:09:47.164 ================= 00:09:47.164 Namespace ID:1 00:09:47.164 Error Recovery Timeout: Unlimited 00:09:47.164 Command Set Identifier: NVM (00h) 00:09:47.164 Deallocate: Supported 00:09:47.164 Deallocated/Unwritten Error: Supported 00:09:47.164 Deallocated Read Value: All 0x00 00:09:47.164 Deallocate in Write Zeroes: Not Supported 00:09:47.164 Deallocated Guard Field: 0xFFFF 00:09:47.164 Flush: Supported 00:09:47.164 Reservation: Not Supported 00:09:47.164 Namespace Sharing Capabilities: Private 00:09:47.164 Size (in LBAs): 1048576 (4GiB) 00:09:47.164 Capacity (in LBAs): 1048576 (4GiB) 00:09:47.164 Utilization (in LBAs): 1048576 (4GiB) 00:09:47.164 Thin Provisioning: Not Supported 00:09:47.164 Per-NS Atomic Units: No 00:09:47.164 Maximum Single Source Range Length: 128 00:09:47.164 Maximum Copy Length: 128 00:09:47.164 Maximum Source Range Count: 128 00:09:47.164 NGUID/EUI64 Never Reused: No 00:09:47.164 Namespace Write Protected: No 00:09:47.164 Number of LBA Formats: 8 00:09:47.164 Current LBA Format: LBA Format #04 00:09:47.164 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:47.164 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:47.164 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:47.164 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:47.164 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:47.164 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:47.164 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:47.164 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:47.164 00:09:47.164 Namespace ID:2 00:09:47.164 Error Recovery Timeout: Unlimited 00:09:47.164 Command Set Identifier: NVM (00h) 00:09:47.164 Deallocate: Supported 00:09:47.164 
Deallocated/Unwritten Error: Supported 00:09:47.164 Deallocated Read Value: All 0x00 00:09:47.164 Deallocate in Write Zeroes: Not Supported 00:09:47.164 Deallocated Guard Field: 0xFFFF 00:09:47.164 Flush: Supported 00:09:47.164 Reservation: Not Supported 00:09:47.164 Namespace Sharing Capabilities: Private 00:09:47.164 Size (in LBAs): 1048576 (4GiB) 00:09:47.164 Capacity (in LBAs): 1048576 (4GiB) 00:09:47.164 Utilization (in LBAs): 1048576 (4GiB) 00:09:47.164 Thin Provisioning: Not Supported 00:09:47.164 Per-NS Atomic Units: No 00:09:47.164 Maximum Single Source Range Length: 128 00:09:47.164 Maximum Copy Length: 128 00:09:47.164 Maximum Source Range Count: 128 00:09:47.164 NGUID/EUI64 Never Reused: No 00:09:47.164 Namespace Write Protected: No 00:09:47.164 Number of LBA Formats: 8 00:09:47.164 Current LBA Format: LBA Format #04 00:09:47.164 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:47.164 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:47.164 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:47.164 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:47.164 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:47.164 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:47.164 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:47.164 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:47.164 00:09:47.164 Namespace ID:3 00:09:47.164 Error Recovery Timeout: Unlimited 00:09:47.164 Command Set Identifier: NVM (00h) 00:09:47.164 Deallocate: Supported 00:09:47.164 Deallocated/Unwritten Error: Supported 00:09:47.164 Deallocated Read Value: All 0x00 00:09:47.164 Deallocate in Write Zeroes: Not Supported 00:09:47.164 Deallocated Guard Field: 0xFFFF 00:09:47.164 Flush: Supported 00:09:47.164 Reservation: Not Supported 00:09:47.164 Namespace Sharing Capabilities: Private 00:09:47.164 Size (in LBAs): 1048576 (4GiB) 00:09:47.164 Capacity (in LBAs): 1048576 (4GiB) 00:09:47.164 Utilization (in LBAs): 1048576 (4GiB) 00:09:47.164 Thin Provisioning: Not Supported 00:09:47.164 Per-NS Atomic Units: No 00:09:47.164 Maximum Single Source Range Length: 128 00:09:47.164 Maximum Copy Length: 128 00:09:47.164 Maximum Source Range Count: 128 00:09:47.164 NGUID/EUI64 Never Reused: No 00:09:47.164 Namespace Write Protected: No 00:09:47.164 Number of LBA Formats: 8 00:09:47.164 Current LBA Format: LBA Format #04 00:09:47.164 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:47.164 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:47.164 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:47.164 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:47.164 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:47.164 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:47.164 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:47.164 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:47.164 00:09:47.164 15:45:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:47.164 15:45:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:47.423 ===================================================== 00:09:47.423 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:47.423 ===================================================== 00:09:47.423 Controller Capabilities/Features 00:09:47.423 ================================ 00:09:47.423 Vendor ID: 1b36 00:09:47.423 Subsystem Vendor ID: 1af4 00:09:47.423 Serial Number: 12340 00:09:47.423 Model Number: QEMU NVMe Ctrl 
00:09:47.423 Firmware Version: 8.0.0 00:09:47.423 Recommended Arb Burst: 6 00:09:47.423 IEEE OUI Identifier: 00 54 52 00:09:47.423 Multi-path I/O 00:09:47.423 May have multiple subsystem ports: No 00:09:47.423 May have multiple controllers: No 00:09:47.423 Associated with SR-IOV VF: No 00:09:47.423 Max Data Transfer Size: 524288 00:09:47.423 Max Number of Namespaces: 256 00:09:47.424 Max Number of I/O Queues: 64 00:09:47.424 NVMe Specification Version (VS): 1.4 00:09:47.424 NVMe Specification Version (Identify): 1.4 00:09:47.424 Maximum Queue Entries: 2048 00:09:47.424 Contiguous Queues Required: Yes 00:09:47.424 Arbitration Mechanisms Supported 00:09:47.424 Weighted Round Robin: Not Supported 00:09:47.424 Vendor Specific: Not Supported 00:09:47.424 Reset Timeout: 7500 ms 00:09:47.424 Doorbell Stride: 4 bytes 00:09:47.424 NVM Subsystem Reset: Not Supported 00:09:47.424 Command Sets Supported 00:09:47.424 NVM Command Set: Supported 00:09:47.424 Boot Partition: Not Supported 00:09:47.424 Memory Page Size Minimum: 4096 bytes 00:09:47.424 Memory Page Size Maximum: 65536 bytes 00:09:47.424 Persistent Memory Region: Not Supported 00:09:47.424 Optional Asynchronous Events Supported 00:09:47.424 Namespace Attribute Notices: Supported 00:09:47.424 Firmware Activation Notices: Not Supported 00:09:47.424 ANA Change Notices: Not Supported 00:09:47.424 PLE Aggregate Log Change Notices: Not Supported 00:09:47.424 LBA Status Info Alert Notices: Not Supported 00:09:47.424 EGE Aggregate Log Change Notices: Not Supported 00:09:47.424 Normal NVM Subsystem Shutdown event: Not Supported 00:09:47.424 Zone Descriptor Change Notices: Not Supported 00:09:47.424 Discovery Log Change Notices: Not Supported 00:09:47.424 Controller Attributes 00:09:47.424 128-bit Host Identifier: Not Supported 00:09:47.424 Non-Operational Permissive Mode: Not Supported 00:09:47.424 NVM Sets: Not Supported 00:09:47.424 Read Recovery Levels: Not Supported 00:09:47.424 Endurance Groups: Not Supported 00:09:47.424 Predictable Latency Mode: Not Supported 00:09:47.424 Traffic Based Keep Alive: Not Supported 00:09:47.424 Namespace Granularity: Not Supported 00:09:47.424 SQ Associations: Not Supported 00:09:47.424 UUID List: Not Supported 00:09:47.424 Multi-Domain Subsystem: Not Supported 00:09:47.424 Fixed Capacity Management: Not Supported 00:09:47.424 Variable Capacity Management: Not Supported 00:09:47.424 Delete Endurance Group: Not Supported 00:09:47.424 Delete NVM Set: Not Supported 00:09:47.424 Extended LBA Formats Supported: Supported 00:09:47.424 Flexible Data Placement Supported: Not Supported 00:09:47.424 00:09:47.424 Controller Memory Buffer Support 00:09:47.424 ================================ 00:09:47.424 Supported: No 00:09:47.424 00:09:47.424 Persistent Memory Region Support 00:09:47.424 ================================ 00:09:47.424 Supported: No 00:09:47.424 00:09:47.424 Admin Command Set Attributes 00:09:47.424 ============================ 00:09:47.424 Security Send/Receive: Not Supported 00:09:47.424 Format NVM: Supported 00:09:47.424 Firmware Activate/Download: Not Supported 00:09:47.424 Namespace Management: Supported 00:09:47.424 Device Self-Test: Not Supported 00:09:47.424 Directives: Supported 00:09:47.424 NVMe-MI: Not Supported 00:09:47.424 Virtualization Management: Not Supported 00:09:47.424 Doorbell Buffer Config: Supported 00:09:47.424 Get LBA Status Capability: Not Supported 00:09:47.424 Command & Feature Lockdown Capability: Not Supported 00:09:47.424 Abort Command Limit: 4 00:09:47.424 Async Event Request 
Limit: 4 00:09:47.424 Number of Firmware Slots: N/A 00:09:47.424 Firmware Slot 1 Read-Only: N/A 00:09:47.424 Firmware Activation Without Reset: N/A 00:09:47.424 Multiple Update Detection Support: N/A 00:09:47.424 Firmware Update Granularity: No Information Provided 00:09:47.424 Per-Namespace SMART Log: Yes 00:09:47.424 Asymmetric Namespace Access Log Page: Not Supported 00:09:47.424 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:47.424 Command Effects Log Page: Supported 00:09:47.424 Get Log Page Extended Data: Supported 00:09:47.424 Telemetry Log Pages: Not Supported 00:09:47.424 Persistent Event Log Pages: Not Supported 00:09:47.424 Supported Log Pages Log Page: May Support 00:09:47.424 Commands Supported & Effects Log Page: Not Supported 00:09:47.424 Feature Identifiers & Effects Log Page: May Support 00:09:47.424 NVMe-MI Commands & Effects Log Page: May Support 00:09:47.424 Data Area 4 for Telemetry Log: Not Supported 00:09:47.424 Error Log Page Entries Supported: 1 00:09:47.424 Keep Alive: Not Supported 00:09:47.424 00:09:47.424 NVM Command Set Attributes 00:09:47.424 ========================== 00:09:47.424 Submission Queue Entry Size 00:09:47.424 Max: 64 00:09:47.424 Min: 64 00:09:47.424 Completion Queue Entry Size 00:09:47.424 Max: 16 00:09:47.424 Min: 16 00:09:47.424 Number of Namespaces: 256 00:09:47.424 Compare Command: Supported 00:09:47.424 Write Uncorrectable Command: Not Supported 00:09:47.424 Dataset Management Command: Supported 00:09:47.424 Write Zeroes Command: Supported 00:09:47.424 Set Features Save Field: Supported 00:09:47.424 Reservations: Not Supported 00:09:47.424 Timestamp: Supported 00:09:47.424 Copy: Supported 00:09:47.424 Volatile Write Cache: Present 00:09:47.424 Atomic Write Unit (Normal): 1 00:09:47.424 Atomic Write Unit (PFail): 1 00:09:47.424 Atomic Compare & Write Unit: 1 00:09:47.424 Fused Compare & Write: Not Supported 00:09:47.424 Scatter-Gather List 00:09:47.424 SGL Command Set: Supported 00:09:47.424 SGL Keyed: Not Supported 00:09:47.424 SGL Bit Bucket Descriptor: Not Supported 00:09:47.424 SGL Metadata Pointer: Not Supported 00:09:47.424 Oversized SGL: Not Supported 00:09:47.424 SGL Metadata Address: Not Supported 00:09:47.424 SGL Offset: Not Supported 00:09:47.424 Transport SGL Data Block: Not Supported 00:09:47.424 Replay Protected Memory Block: Not Supported 00:09:47.424 00:09:47.424 Firmware Slot Information 00:09:47.424 ========================= 00:09:47.424 Active slot: 1 00:09:47.424 Slot 1 Firmware Revision: 1.0 00:09:47.424 00:09:47.424 00:09:47.424 Commands Supported and Effects 00:09:47.424 ============================== 00:09:47.424 Admin Commands 00:09:47.424 -------------- 00:09:47.424 Delete I/O Submission Queue (00h): Supported 00:09:47.424 Create I/O Submission Queue (01h): Supported 00:09:47.424 Get Log Page (02h): Supported 00:09:47.424 Delete I/O Completion Queue (04h): Supported 00:09:47.424 Create I/O Completion Queue (05h): Supported 00:09:47.424 Identify (06h): Supported 00:09:47.424 Abort (08h): Supported 00:09:47.424 Set Features (09h): Supported 00:09:47.424 Get Features (0Ah): Supported 00:09:47.424 Asynchronous Event Request (0Ch): Supported 00:09:47.424 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:47.424 Directive Send (19h): Supported 00:09:47.424 Directive Receive (1Ah): Supported 00:09:47.424 Virtualization Management (1Ch): Supported 00:09:47.424 Doorbell Buffer Config (7Ch): Supported 00:09:47.424 Format NVM (80h): Supported LBA-Change 00:09:47.424 I/O Commands 00:09:47.424 ------------ 
00:09:47.424 Flush (00h): Supported LBA-Change 00:09:47.424 Write (01h): Supported LBA-Change 00:09:47.424 Read (02h): Supported 00:09:47.424 Compare (05h): Supported 00:09:47.424 Write Zeroes (08h): Supported LBA-Change 00:09:47.424 Dataset Management (09h): Supported LBA-Change 00:09:47.424 Unknown (0Ch): Supported 00:09:47.424 Unknown (12h): Supported 00:09:47.424 Copy (19h): Supported LBA-Change 00:09:47.424 Unknown (1Dh): Supported LBA-Change 00:09:47.424 00:09:47.424 Error Log 00:09:47.424 ========= 00:09:47.424 00:09:47.424 Arbitration 00:09:47.424 =========== 00:09:47.424 Arbitration Burst: no limit 00:09:47.424 00:09:47.424 Power Management 00:09:47.424 ================ 00:09:47.424 Number of Power States: 1 00:09:47.424 Current Power State: Power State #0 00:09:47.424 Power State #0: 00:09:47.424 Max Power: 25.00 W 00:09:47.424 Non-Operational State: Operational 00:09:47.424 Entry Latency: 16 microseconds 00:09:47.424 Exit Latency: 4 microseconds 00:09:47.424 Relative Read Throughput: 0 00:09:47.424 Relative Read Latency: 0 00:09:47.424 Relative Write Throughput: 0 00:09:47.424 Relative Write Latency: 0 00:09:47.424 Idle Power: Not Reported 00:09:47.424 Active Power: Not Reported 00:09:47.424 Non-Operational Permissive Mode: Not Supported 00:09:47.424 00:09:47.424 Health Information 00:09:47.424 ================== 00:09:47.424 Critical Warnings: 00:09:47.424 Available Spare Space: OK 00:09:47.424 Temperature: OK 00:09:47.424 Device Reliability: OK 00:09:47.424 Read Only: No 00:09:47.424 Volatile Memory Backup: OK 00:09:47.424 Current Temperature: 323 Kelvin (50 Celsius) 00:09:47.424 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:47.424 Available Spare: 0% 00:09:47.424 Available Spare Threshold: 0% 00:09:47.424 Life Percentage Used: 0% 00:09:47.424 Data Units Read: 1237 00:09:47.424 Data Units Written: 1065 00:09:47.424 Host Read Commands: 56867 00:09:47.424 Host Write Commands: 55310 00:09:47.424 Controller Busy Time: 0 minutes 00:09:47.424 Power Cycles: 0 00:09:47.424 Power On Hours: 0 hours 00:09:47.424 Unsafe Shutdowns: 0 00:09:47.424 Unrecoverable Media Errors: 0 00:09:47.424 Lifetime Error Log Entries: 0 00:09:47.424 Warning Temperature Time: 0 minutes 00:09:47.424 Critical Temperature Time: 0 minutes 00:09:47.424 00:09:47.424 Number of Queues 00:09:47.424 ================ 00:09:47.424 Number of I/O Submission Queues: 64 00:09:47.424 Number of I/O Completion Queues: 64 00:09:47.424 00:09:47.424 ZNS Specific Controller Data 00:09:47.424 ============================ 00:09:47.424 Zone Append Size Limit: 0 00:09:47.424 00:09:47.424 00:09:47.424 Active Namespaces 00:09:47.424 ================= 00:09:47.424 Namespace ID:1 00:09:47.424 Error Recovery Timeout: Unlimited 00:09:47.424 Command Set Identifier: NVM (00h) 00:09:47.424 Deallocate: Supported 00:09:47.424 Deallocated/Unwritten Error: Supported 00:09:47.424 Deallocated Read Value: All 0x00 00:09:47.424 Deallocate in Write Zeroes: Not Supported 00:09:47.424 Deallocated Guard Field: 0xFFFF 00:09:47.424 Flush: Supported 00:09:47.424 Reservation: Not Supported 00:09:47.424 Metadata Transferred as: Separate Metadata Buffer 00:09:47.424 Namespace Sharing Capabilities: Private 00:09:47.424 Size (in LBAs): 1548666 (5GiB) 00:09:47.424 Capacity (in LBAs): 1548666 (5GiB) 00:09:47.424 Utilization (in LBAs): 1548666 (5GiB) 00:09:47.424 Thin Provisioning: Not Supported 00:09:47.424 Per-NS Atomic Units: No 00:09:47.424 Maximum Single Source Range Length: 128 00:09:47.424 Maximum Copy Length: 128 00:09:47.424 Maximum Source Range 
Count: 128 00:09:47.424 NGUID/EUI64 Never Reused: No 00:09:47.424 Namespace Write Protected: No 00:09:47.424 Number of LBA Formats: 8 00:09:47.424 Current LBA Format: LBA Format #07 00:09:47.424 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:47.424 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:47.424 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:47.424 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:47.424 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:47.424 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:47.424 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:47.424 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:47.424 00:09:47.424 15:45:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:47.424 15:45:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:47.683 ===================================================== 00:09:47.683 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:47.683 ===================================================== 00:09:47.683 Controller Capabilities/Features 00:09:47.683 ================================ 00:09:47.683 Vendor ID: 1b36 00:09:47.683 Subsystem Vendor ID: 1af4 00:09:47.683 Serial Number: 12341 00:09:47.683 Model Number: QEMU NVMe Ctrl 00:09:47.683 Firmware Version: 8.0.0 00:09:47.683 Recommended Arb Burst: 6 00:09:47.683 IEEE OUI Identifier: 00 54 52 00:09:47.683 Multi-path I/O 00:09:47.683 May have multiple subsystem ports: No 00:09:47.683 May have multiple controllers: No 00:09:47.683 Associated with SR-IOV VF: No 00:09:47.683 Max Data Transfer Size: 524288 00:09:47.683 Max Number of Namespaces: 256 00:09:47.683 Max Number of I/O Queues: 64 00:09:47.683 NVMe Specification Version (VS): 1.4 00:09:47.683 NVMe Specification Version (Identify): 1.4 00:09:47.683 Maximum Queue Entries: 2048 00:09:47.683 Contiguous Queues Required: Yes 00:09:47.683 Arbitration Mechanisms Supported 00:09:47.683 Weighted Round Robin: Not Supported 00:09:47.683 Vendor Specific: Not Supported 00:09:47.683 Reset Timeout: 7500 ms 00:09:47.683 Doorbell Stride: 4 bytes 00:09:47.683 NVM Subsystem Reset: Not Supported 00:09:47.683 Command Sets Supported 00:09:47.683 NVM Command Set: Supported 00:09:47.683 Boot Partition: Not Supported 00:09:47.683 Memory Page Size Minimum: 4096 bytes 00:09:47.683 Memory Page Size Maximum: 65536 bytes 00:09:47.683 Persistent Memory Region: Not Supported 00:09:47.683 Optional Asynchronous Events Supported 00:09:47.683 Namespace Attribute Notices: Supported 00:09:47.683 Firmware Activation Notices: Not Supported 00:09:47.683 ANA Change Notices: Not Supported 00:09:47.683 PLE Aggregate Log Change Notices: Not Supported 00:09:47.683 LBA Status Info Alert Notices: Not Supported 00:09:47.683 EGE Aggregate Log Change Notices: Not Supported 00:09:47.683 Normal NVM Subsystem Shutdown event: Not Supported 00:09:47.683 Zone Descriptor Change Notices: Not Supported 00:09:47.683 Discovery Log Change Notices: Not Supported 00:09:47.683 Controller Attributes 00:09:47.683 128-bit Host Identifier: Not Supported 00:09:47.683 Non-Operational Permissive Mode: Not Supported 00:09:47.683 NVM Sets: Not Supported 00:09:47.683 Read Recovery Levels: Not Supported 00:09:47.683 Endurance Groups: Not Supported 00:09:47.683 Predictable Latency Mode: Not Supported 00:09:47.683 Traffic Based Keep Alive: Not Supported 00:09:47.683 Namespace Granularity: Not Supported 00:09:47.683 SQ Associations: Not 
Supported 00:09:47.683 UUID List: Not Supported 00:09:47.683 Multi-Domain Subsystem: Not Supported 00:09:47.683 Fixed Capacity Management: Not Supported 00:09:47.683 Variable Capacity Management: Not Supported 00:09:47.683 Delete Endurance Group: Not Supported 00:09:47.683 Delete NVM Set: Not Supported 00:09:47.683 Extended LBA Formats Supported: Supported 00:09:47.683 Flexible Data Placement Supported: Not Supported 00:09:47.683 00:09:47.683 Controller Memory Buffer Support 00:09:47.683 ================================ 00:09:47.683 Supported: No 00:09:47.683 00:09:47.683 Persistent Memory Region Support 00:09:47.683 ================================ 00:09:47.683 Supported: No 00:09:47.683 00:09:47.683 Admin Command Set Attributes 00:09:47.683 ============================ 00:09:47.683 Security Send/Receive: Not Supported 00:09:47.683 Format NVM: Supported 00:09:47.683 Firmware Activate/Download: Not Supported 00:09:47.683 Namespace Management: Supported 00:09:47.683 Device Self-Test: Not Supported 00:09:47.683 Directives: Supported 00:09:47.683 NVMe-MI: Not Supported 00:09:47.683 Virtualization Management: Not Supported 00:09:47.683 Doorbell Buffer Config: Supported 00:09:47.683 Get LBA Status Capability: Not Supported 00:09:47.683 Command & Feature Lockdown Capability: Not Supported 00:09:47.683 Abort Command Limit: 4 00:09:47.683 Async Event Request Limit: 4 00:09:47.683 Number of Firmware Slots: N/A 00:09:47.683 Firmware Slot 1 Read-Only: N/A 00:09:47.683 Firmware Activation Without Reset: N/A 00:09:47.683 Multiple Update Detection Support: N/A 00:09:47.683 Firmware Update Granularity: No Information Provided 00:09:47.683 Per-Namespace SMART Log: Yes 00:09:47.683 Asymmetric Namespace Access Log Page: Not Supported 00:09:47.683 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:47.683 Command Effects Log Page: Supported 00:09:47.683 Get Log Page Extended Data: Supported 00:09:47.683 Telemetry Log Pages: Not Supported 00:09:47.683 Persistent Event Log Pages: Not Supported 00:09:47.683 Supported Log Pages Log Page: May Support 00:09:47.683 Commands Supported & Effects Log Page: Not Supported 00:09:47.683 Feature Identifiers & Effects Log Page: May Support 00:09:47.683 NVMe-MI Commands & Effects Log Page: May Support 00:09:47.683 Data Area 4 for Telemetry Log: Not Supported 00:09:47.683 Error Log Page Entries Supported: 1 00:09:47.683 Keep Alive: Not Supported 00:09:47.683 00:09:47.683 NVM Command Set Attributes 00:09:47.683 ========================== 00:09:47.683 Submission Queue Entry Size 00:09:47.683 Max: 64 00:09:47.683 Min: 64 00:09:47.683 Completion Queue Entry Size 00:09:47.683 Max: 16 00:09:47.683 Min: 16 00:09:47.683 Number of Namespaces: 256 00:09:47.683 Compare Command: Supported 00:09:47.683 Write Uncorrectable Command: Not Supported 00:09:47.683 Dataset Management Command: Supported 00:09:47.683 Write Zeroes Command: Supported 00:09:47.683 Set Features Save Field: Supported 00:09:47.683 Reservations: Not Supported 00:09:47.683 Timestamp: Supported 00:09:47.683 Copy: Supported 00:09:47.683 Volatile Write Cache: Present 00:09:47.683 Atomic Write Unit (Normal): 1 00:09:47.683 Atomic Write Unit (PFail): 1 00:09:47.683 Atomic Compare & Write Unit: 1 00:09:47.683 Fused Compare & Write: Not Supported 00:09:47.683 Scatter-Gather List 00:09:47.683 SGL Command Set: Supported 00:09:47.683 SGL Keyed: Not Supported 00:09:47.683 SGL Bit Bucket Descriptor: Not Supported 00:09:47.683 SGL Metadata Pointer: Not Supported 00:09:47.683 Oversized SGL: Not Supported 00:09:47.683 SGL Metadata Address: 
Not Supported 00:09:47.683 SGL Offset: Not Supported 00:09:47.683 Transport SGL Data Block: Not Supported 00:09:47.683 Replay Protected Memory Block: Not Supported 00:09:47.683 00:09:47.683 Firmware Slot Information 00:09:47.683 ========================= 00:09:47.683 Active slot: 1 00:09:47.683 Slot 1 Firmware Revision: 1.0 00:09:47.683 00:09:47.683 00:09:47.683 Commands Supported and Effects 00:09:47.683 ============================== 00:09:47.683 Admin Commands 00:09:47.683 -------------- 00:09:47.683 Delete I/O Submission Queue (00h): Supported 00:09:47.683 Create I/O Submission Queue (01h): Supported 00:09:47.683 Get Log Page (02h): Supported 00:09:47.683 Delete I/O Completion Queue (04h): Supported 00:09:47.683 Create I/O Completion Queue (05h): Supported 00:09:47.683 Identify (06h): Supported 00:09:47.683 Abort (08h): Supported 00:09:47.683 Set Features (09h): Supported 00:09:47.683 Get Features (0Ah): Supported 00:09:47.683 Asynchronous Event Request (0Ch): Supported 00:09:47.683 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:47.683 Directive Send (19h): Supported 00:09:47.683 Directive Receive (1Ah): Supported 00:09:47.683 Virtualization Management (1Ch): Supported 00:09:47.683 Doorbell Buffer Config (7Ch): Supported 00:09:47.683 Format NVM (80h): Supported LBA-Change 00:09:47.683 I/O Commands 00:09:47.683 ------------ 00:09:47.683 Flush (00h): Supported LBA-Change 00:09:47.683 Write (01h): Supported LBA-Change 00:09:47.683 Read (02h): Supported 00:09:47.683 Compare (05h): Supported 00:09:47.683 Write Zeroes (08h): Supported LBA-Change 00:09:47.683 Dataset Management (09h): Supported LBA-Change 00:09:47.683 Unknown (0Ch): Supported 00:09:47.683 Unknown (12h): Supported 00:09:47.683 Copy (19h): Supported LBA-Change 00:09:47.683 Unknown (1Dh): Supported LBA-Change 00:09:47.683 00:09:47.683 Error Log 00:09:47.683 ========= 00:09:47.683 00:09:47.683 Arbitration 00:09:47.683 =========== 00:09:47.683 Arbitration Burst: no limit 00:09:47.683 00:09:47.684 Power Management 00:09:47.684 ================ 00:09:47.684 Number of Power States: 1 00:09:47.684 Current Power State: Power State #0 00:09:47.684 Power State #0: 00:09:47.684 Max Power: 25.00 W 00:09:47.684 Non-Operational State: Operational 00:09:47.684 Entry Latency: 16 microseconds 00:09:47.684 Exit Latency: 4 microseconds 00:09:47.684 Relative Read Throughput: 0 00:09:47.684 Relative Read Latency: 0 00:09:47.684 Relative Write Throughput: 0 00:09:47.684 Relative Write Latency: 0 00:09:47.684 Idle Power: Not Reported 00:09:47.684 Active Power: Not Reported 00:09:47.684 Non-Operational Permissive Mode: Not Supported 00:09:47.684 00:09:47.684 Health Information 00:09:47.684 ================== 00:09:47.684 Critical Warnings: 00:09:47.684 Available Spare Space: OK 00:09:47.684 Temperature: OK 00:09:47.684 Device Reliability: OK 00:09:47.684 Read Only: No 00:09:47.684 Volatile Memory Backup: OK 00:09:47.684 Current Temperature: 323 Kelvin (50 Celsius) 00:09:47.684 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:47.684 Available Spare: 0% 00:09:47.684 Available Spare Threshold: 0% 00:09:47.684 Life Percentage Used: 0% 00:09:47.684 Data Units Read: 920 00:09:47.684 Data Units Written: 768 00:09:47.684 Host Read Commands: 40604 00:09:47.684 Host Write Commands: 38333 00:09:47.684 Controller Busy Time: 0 minutes 00:09:47.684 Power Cycles: 0 00:09:47.684 Power On Hours: 0 hours 00:09:47.684 Unsafe Shutdowns: 0 00:09:47.684 Unrecoverable Media Errors: 0 00:09:47.684 Lifetime Error Log Entries: 0 00:09:47.684 Warning 
Temperature Time: 0 minutes 00:09:47.684 Critical Temperature Time: 0 minutes 00:09:47.684 00:09:47.684 Number of Queues 00:09:47.684 ================ 00:09:47.684 Number of I/O Submission Queues: 64 00:09:47.684 Number of I/O Completion Queues: 64 00:09:47.684 00:09:47.684 ZNS Specific Controller Data 00:09:47.684 ============================ 00:09:47.684 Zone Append Size Limit: 0 00:09:47.684 00:09:47.684 00:09:47.684 Active Namespaces 00:09:47.684 ================= 00:09:47.684 Namespace ID:1 00:09:47.684 Error Recovery Timeout: Unlimited 00:09:47.684 Command Set Identifier: NVM (00h) 00:09:47.684 Deallocate: Supported 00:09:47.684 Deallocated/Unwritten Error: Supported 00:09:47.684 Deallocated Read Value: All 0x00 00:09:47.684 Deallocate in Write Zeroes: Not Supported 00:09:47.684 Deallocated Guard Field: 0xFFFF 00:09:47.684 Flush: Supported 00:09:47.684 Reservation: Not Supported 00:09:47.684 Namespace Sharing Capabilities: Private 00:09:47.684 Size (in LBAs): 1310720 (5GiB) 00:09:47.684 Capacity (in LBAs): 1310720 (5GiB) 00:09:47.684 Utilization (in LBAs): 1310720 (5GiB) 00:09:47.684 Thin Provisioning: Not Supported 00:09:47.684 Per-NS Atomic Units: No 00:09:47.684 Maximum Single Source Range Length: 128 00:09:47.684 Maximum Copy Length: 128 00:09:47.684 Maximum Source Range Count: 128 00:09:47.684 NGUID/EUI64 Never Reused: No 00:09:47.684 Namespace Write Protected: No 00:09:47.684 Number of LBA Formats: 8 00:09:47.684 Current LBA Format: LBA Format #04 00:09:47.684 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:47.684 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:47.684 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:47.684 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:47.684 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:47.684 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:47.684 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:47.684 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:47.684 00:09:47.684 15:45:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:47.684 15:45:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:47.942 ===================================================== 00:09:47.942 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:47.942 ===================================================== 00:09:47.943 Controller Capabilities/Features 00:09:47.943 ================================ 00:09:47.943 Vendor ID: 1b36 00:09:47.943 Subsystem Vendor ID: 1af4 00:09:47.943 Serial Number: 12342 00:09:47.943 Model Number: QEMU NVMe Ctrl 00:09:47.943 Firmware Version: 8.0.0 00:09:47.943 Recommended Arb Burst: 6 00:09:47.943 IEEE OUI Identifier: 00 54 52 00:09:47.943 Multi-path I/O 00:09:47.943 May have multiple subsystem ports: No 00:09:47.943 May have multiple controllers: No 00:09:47.943 Associated with SR-IOV VF: No 00:09:47.943 Max Data Transfer Size: 524288 00:09:47.943 Max Number of Namespaces: 256 00:09:47.943 Max Number of I/O Queues: 64 00:09:47.943 NVMe Specification Version (VS): 1.4 00:09:47.943 NVMe Specification Version (Identify): 1.4 00:09:47.943 Maximum Queue Entries: 2048 00:09:47.943 Contiguous Queues Required: Yes 00:09:47.943 Arbitration Mechanisms Supported 00:09:47.943 Weighted Round Robin: Not Supported 00:09:47.943 Vendor Specific: Not Supported 00:09:47.943 Reset Timeout: 7500 ms 00:09:47.943 Doorbell Stride: 4 bytes 00:09:47.943 NVM Subsystem Reset: Not Supported 
00:09:47.943 Command Sets Supported 00:09:47.943 NVM Command Set: Supported 00:09:47.943 Boot Partition: Not Supported 00:09:47.943 Memory Page Size Minimum: 4096 bytes 00:09:47.943 Memory Page Size Maximum: 65536 bytes 00:09:47.943 Persistent Memory Region: Not Supported 00:09:47.943 Optional Asynchronous Events Supported 00:09:47.943 Namespace Attribute Notices: Supported 00:09:47.943 Firmware Activation Notices: Not Supported 00:09:47.943 ANA Change Notices: Not Supported 00:09:47.943 PLE Aggregate Log Change Notices: Not Supported 00:09:47.943 LBA Status Info Alert Notices: Not Supported 00:09:47.943 EGE Aggregate Log Change Notices: Not Supported 00:09:47.943 Normal NVM Subsystem Shutdown event: Not Supported 00:09:47.943 Zone Descriptor Change Notices: Not Supported 00:09:47.943 Discovery Log Change Notices: Not Supported 00:09:47.943 Controller Attributes 00:09:47.943 128-bit Host Identifier: Not Supported 00:09:47.943 Non-Operational Permissive Mode: Not Supported 00:09:47.943 NVM Sets: Not Supported 00:09:47.943 Read Recovery Levels: Not Supported 00:09:47.943 Endurance Groups: Not Supported 00:09:47.943 Predictable Latency Mode: Not Supported 00:09:47.943 Traffic Based Keep Alive: Not Supported 00:09:47.943 Namespace Granularity: Not Supported 00:09:47.943 SQ Associations: Not Supported 00:09:47.943 UUID List: Not Supported 00:09:47.943 Multi-Domain Subsystem: Not Supported 00:09:47.943 Fixed Capacity Management: Not Supported 00:09:47.943 Variable Capacity Management: Not Supported 00:09:47.943 Delete Endurance Group: Not Supported 00:09:47.943 Delete NVM Set: Not Supported 00:09:47.943 Extended LBA Formats Supported: Supported 00:09:47.943 Flexible Data Placement Supported: Not Supported 00:09:47.943 00:09:47.943 Controller Memory Buffer Support 00:09:47.943 ================================ 00:09:47.943 Supported: No 00:09:47.943 00:09:47.943 Persistent Memory Region Support 00:09:47.943 ================================ 00:09:47.943 Supported: No 00:09:47.943 00:09:47.943 Admin Command Set Attributes 00:09:47.943 ============================ 00:09:47.943 Security Send/Receive: Not Supported 00:09:47.943 Format NVM: Supported 00:09:47.943 Firmware Activate/Download: Not Supported 00:09:47.943 Namespace Management: Supported 00:09:47.943 Device Self-Test: Not Supported 00:09:47.943 Directives: Supported 00:09:47.943 NVMe-MI: Not Supported 00:09:47.943 Virtualization Management: Not Supported 00:09:47.943 Doorbell Buffer Config: Supported 00:09:47.943 Get LBA Status Capability: Not Supported 00:09:47.943 Command & Feature Lockdown Capability: Not Supported 00:09:47.943 Abort Command Limit: 4 00:09:47.943 Async Event Request Limit: 4 00:09:47.943 Number of Firmware Slots: N/A 00:09:47.943 Firmware Slot 1 Read-Only: N/A 00:09:47.943 Firmware Activation Without Reset: N/A 00:09:47.943 Multiple Update Detection Support: N/A 00:09:47.943 Firmware Update Granularity: No Information Provided 00:09:47.943 Per-Namespace SMART Log: Yes 00:09:47.943 Asymmetric Namespace Access Log Page: Not Supported 00:09:47.943 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:47.943 Command Effects Log Page: Supported 00:09:47.943 Get Log Page Extended Data: Supported 00:09:47.943 Telemetry Log Pages: Not Supported 00:09:47.943 Persistent Event Log Pages: Not Supported 00:09:47.943 Supported Log Pages Log Page: May Support 00:09:47.943 Commands Supported & Effects Log Page: Not Supported 00:09:47.943 Feature Identifiers & Effects Log Page: May Support 00:09:47.943 NVMe-MI Commands & Effects Log Page: May 
Support 00:09:47.943 Data Area 4 for Telemetry Log: Not Supported 00:09:47.943 Error Log Page Entries Supported: 1 00:09:47.943 Keep Alive: Not Supported 00:09:47.943 00:09:47.943 NVM Command Set Attributes 00:09:47.943 ========================== 00:09:47.943 Submission Queue Entry Size 00:09:47.943 Max: 64 00:09:47.943 Min: 64 00:09:47.943 Completion Queue Entry Size 00:09:47.943 Max: 16 00:09:47.943 Min: 16 00:09:47.943 Number of Namespaces: 256 00:09:47.943 Compare Command: Supported 00:09:47.943 Write Uncorrectable Command: Not Supported 00:09:47.943 Dataset Management Command: Supported 00:09:47.943 Write Zeroes Command: Supported 00:09:47.943 Set Features Save Field: Supported 00:09:47.943 Reservations: Not Supported 00:09:47.943 Timestamp: Supported 00:09:47.943 Copy: Supported 00:09:47.943 Volatile Write Cache: Present 00:09:47.943 Atomic Write Unit (Normal): 1 00:09:47.943 Atomic Write Unit (PFail): 1 00:09:47.943 Atomic Compare & Write Unit: 1 00:09:47.943 Fused Compare & Write: Not Supported 00:09:47.943 Scatter-Gather List 00:09:47.943 SGL Command Set: Supported 00:09:47.943 SGL Keyed: Not Supported 00:09:47.943 SGL Bit Bucket Descriptor: Not Supported 00:09:47.943 SGL Metadata Pointer: Not Supported 00:09:47.943 Oversized SGL: Not Supported 00:09:47.943 SGL Metadata Address: Not Supported 00:09:47.943 SGL Offset: Not Supported 00:09:47.943 Transport SGL Data Block: Not Supported 00:09:47.943 Replay Protected Memory Block: Not Supported 00:09:47.943 00:09:47.943 Firmware Slot Information 00:09:47.943 ========================= 00:09:47.943 Active slot: 1 00:09:47.943 Slot 1 Firmware Revision: 1.0 00:09:47.943 00:09:47.943 00:09:47.943 Commands Supported and Effects 00:09:47.943 ============================== 00:09:47.943 Admin Commands 00:09:47.943 -------------- 00:09:47.943 Delete I/O Submission Queue (00h): Supported 00:09:47.943 Create I/O Submission Queue (01h): Supported 00:09:47.943 Get Log Page (02h): Supported 00:09:47.943 Delete I/O Completion Queue (04h): Supported 00:09:47.943 Create I/O Completion Queue (05h): Supported 00:09:47.943 Identify (06h): Supported 00:09:47.943 Abort (08h): Supported 00:09:47.943 Set Features (09h): Supported 00:09:47.943 Get Features (0Ah): Supported 00:09:47.943 Asynchronous Event Request (0Ch): Supported 00:09:47.943 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:47.943 Directive Send (19h): Supported 00:09:47.943 Directive Receive (1Ah): Supported 00:09:47.943 Virtualization Management (1Ch): Supported 00:09:47.943 Doorbell Buffer Config (7Ch): Supported 00:09:47.943 Format NVM (80h): Supported LBA-Change 00:09:47.943 I/O Commands 00:09:47.943 ------------ 00:09:47.943 Flush (00h): Supported LBA-Change 00:09:47.943 Write (01h): Supported LBA-Change 00:09:47.943 Read (02h): Supported 00:09:47.943 Compare (05h): Supported 00:09:47.943 Write Zeroes (08h): Supported LBA-Change 00:09:47.943 Dataset Management (09h): Supported LBA-Change 00:09:47.943 Unknown (0Ch): Supported 00:09:47.943 Unknown (12h): Supported 00:09:47.943 Copy (19h): Supported LBA-Change 00:09:47.943 Unknown (1Dh): Supported LBA-Change 00:09:47.943 00:09:47.943 Error Log 00:09:47.943 ========= 00:09:47.943 00:09:47.943 Arbitration 00:09:47.943 =========== 00:09:47.943 Arbitration Burst: no limit 00:09:47.943 00:09:47.943 Power Management 00:09:47.943 ================ 00:09:47.943 Number of Power States: 1 00:09:47.943 Current Power State: Power State #0 00:09:47.943 Power State #0: 00:09:47.944 Max Power: 25.00 W 00:09:47.944 Non-Operational State: 
Operational 00:09:47.944 Entry Latency: 16 microseconds 00:09:47.944 Exit Latency: 4 microseconds 00:09:47.944 Relative Read Throughput: 0 00:09:47.944 Relative Read Latency: 0 00:09:47.944 Relative Write Throughput: 0 00:09:47.944 Relative Write Latency: 0 00:09:47.944 Idle Power: Not Reported 00:09:47.944 Active Power: Not Reported 00:09:47.944 Non-Operational Permissive Mode: Not Supported 00:09:47.944 00:09:47.944 Health Information 00:09:47.944 ================== 00:09:47.944 Critical Warnings: 00:09:47.944 Available Spare Space: OK 00:09:47.944 Temperature: OK 00:09:47.944 Device Reliability: OK 00:09:47.944 Read Only: No 00:09:47.944 Volatile Memory Backup: OK 00:09:47.944 Current Temperature: 323 Kelvin (50 Celsius) 00:09:47.944 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:47.944 Available Spare: 0% 00:09:47.944 Available Spare Threshold: 0% 00:09:47.944 Life Percentage Used: 0% 00:09:47.944 Data Units Read: 2599 00:09:47.944 Data Units Written: 2280 00:09:47.944 Host Read Commands: 119051 00:09:47.944 Host Write Commands: 114821 00:09:47.944 Controller Busy Time: 0 minutes 00:09:47.944 Power Cycles: 0 00:09:47.944 Power On Hours: 0 hours 00:09:47.944 Unsafe Shutdowns: 0 00:09:47.944 Unrecoverable Media Errors: 0 00:09:47.944 Lifetime Error Log Entries: 0 00:09:47.944 Warning Temperature Time: 0 minutes 00:09:47.944 Critical Temperature Time: 0 minutes 00:09:47.944 00:09:47.944 Number of Queues 00:09:47.944 ================ 00:09:47.944 Number of I/O Submission Queues: 64 00:09:47.944 Number of I/O Completion Queues: 64 00:09:47.944 00:09:47.944 ZNS Specific Controller Data 00:09:47.944 ============================ 00:09:47.944 Zone Append Size Limit: 0 00:09:47.944 00:09:47.944 00:09:47.944 Active Namespaces 00:09:47.944 ================= 00:09:47.944 Namespace ID:1 00:09:47.944 Error Recovery Timeout: Unlimited 00:09:47.944 Command Set Identifier: NVM (00h) 00:09:47.944 Deallocate: Supported 00:09:47.944 Deallocated/Unwritten Error: Supported 00:09:47.944 Deallocated Read Value: All 0x00 00:09:47.944 Deallocate in Write Zeroes: Not Supported 00:09:47.944 Deallocated Guard Field: 0xFFFF 00:09:47.944 Flush: Supported 00:09:47.944 Reservation: Not Supported 00:09:47.944 Namespace Sharing Capabilities: Private 00:09:47.944 Size (in LBAs): 1048576 (4GiB) 00:09:47.944 Capacity (in LBAs): 1048576 (4GiB) 00:09:47.944 Utilization (in LBAs): 1048576 (4GiB) 00:09:47.944 Thin Provisioning: Not Supported 00:09:47.944 Per-NS Atomic Units: No 00:09:47.944 Maximum Single Source Range Length: 128 00:09:47.944 Maximum Copy Length: 128 00:09:47.944 Maximum Source Range Count: 128 00:09:47.944 NGUID/EUI64 Never Reused: No 00:09:47.944 Namespace Write Protected: No 00:09:47.944 Number of LBA Formats: 8 00:09:47.944 Current LBA Format: LBA Format #04 00:09:47.944 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:47.944 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:47.944 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:47.944 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:47.944 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:47.944 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:47.944 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:47.944 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:47.944 00:09:47.944 Namespace ID:2 00:09:47.944 Error Recovery Timeout: Unlimited 00:09:47.944 Command Set Identifier: NVM (00h) 00:09:47.944 Deallocate: Supported 00:09:47.944 Deallocated/Unwritten Error: Supported 00:09:47.944 Deallocated Read Value: 
All 0x00 00:09:47.944 Deallocate in Write Zeroes: Not Supported 00:09:47.944 Deallocated Guard Field: 0xFFFF 00:09:47.944 Flush: Supported 00:09:47.944 Reservation: Not Supported 00:09:47.944 Namespace Sharing Capabilities: Private 00:09:47.944 Size (in LBAs): 1048576 (4GiB) 00:09:47.944 Capacity (in LBAs): 1048576 (4GiB) 00:09:47.944 Utilization (in LBAs): 1048576 (4GiB) 00:09:47.944 Thin Provisioning: Not Supported 00:09:47.944 Per-NS Atomic Units: No 00:09:47.944 Maximum Single Source Range Length: 128 00:09:47.944 Maximum Copy Length: 128 00:09:47.944 Maximum Source Range Count: 128 00:09:47.944 NGUID/EUI64 Never Reused: No 00:09:47.944 Namespace Write Protected: No 00:09:47.944 Number of LBA Formats: 8 00:09:47.944 Current LBA Format: LBA Format #04 00:09:47.944 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:47.944 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:47.944 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:47.944 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:47.944 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:47.944 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:47.944 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:47.944 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:47.944 00:09:47.944 Namespace ID:3 00:09:47.944 Error Recovery Timeout: Unlimited 00:09:47.944 Command Set Identifier: NVM (00h) 00:09:47.944 Deallocate: Supported 00:09:47.944 Deallocated/Unwritten Error: Supported 00:09:47.944 Deallocated Read Value: All 0x00 00:09:47.944 Deallocate in Write Zeroes: Not Supported 00:09:47.944 Deallocated Guard Field: 0xFFFF 00:09:47.944 Flush: Supported 00:09:47.944 Reservation: Not Supported 00:09:47.944 Namespace Sharing Capabilities: Private 00:09:47.944 Size (in LBAs): 1048576 (4GiB) 00:09:47.944 Capacity (in LBAs): 1048576 (4GiB) 00:09:47.944 Utilization (in LBAs): 1048576 (4GiB) 00:09:47.944 Thin Provisioning: Not Supported 00:09:47.944 Per-NS Atomic Units: No 00:09:47.944 Maximum Single Source Range Length: 128 00:09:47.944 Maximum Copy Length: 128 00:09:47.944 Maximum Source Range Count: 128 00:09:47.944 NGUID/EUI64 Never Reused: No 00:09:47.944 Namespace Write Protected: No 00:09:47.944 Number of LBA Formats: 8 00:09:47.944 Current LBA Format: LBA Format #04 00:09:47.944 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:47.944 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:47.944 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:47.944 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:47.944 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:47.944 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:47.944 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:47.944 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:47.944 00:09:47.944 15:45:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:47.944 15:45:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:48.203 ===================================================== 00:09:48.203 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:48.203 ===================================================== 00:09:48.203 Controller Capabilities/Features 00:09:48.203 ================================ 00:09:48.203 Vendor ID: 1b36 00:09:48.203 Subsystem Vendor ID: 1af4 00:09:48.203 Serial Number: 12343 00:09:48.203 Model Number: QEMU NVMe Ctrl 00:09:48.203 Firmware Version: 8.0.0 00:09:48.203 Recommended Arb Burst: 6 
00:09:48.203 IEEE OUI Identifier: 00 54 52 00:09:48.203 Multi-path I/O 00:09:48.203 May have multiple subsystem ports: No 00:09:48.203 May have multiple controllers: Yes 00:09:48.203 Associated with SR-IOV VF: No 00:09:48.203 Max Data Transfer Size: 524288 00:09:48.203 Max Number of Namespaces: 256 00:09:48.203 Max Number of I/O Queues: 64 00:09:48.203 NVMe Specification Version (VS): 1.4 00:09:48.203 NVMe Specification Version (Identify): 1.4 00:09:48.203 Maximum Queue Entries: 2048 00:09:48.203 Contiguous Queues Required: Yes 00:09:48.203 Arbitration Mechanisms Supported 00:09:48.203 Weighted Round Robin: Not Supported 00:09:48.203 Vendor Specific: Not Supported 00:09:48.203 Reset Timeout: 7500 ms 00:09:48.203 Doorbell Stride: 4 bytes 00:09:48.203 NVM Subsystem Reset: Not Supported 00:09:48.203 Command Sets Supported 00:09:48.203 NVM Command Set: Supported 00:09:48.203 Boot Partition: Not Supported 00:09:48.203 Memory Page Size Minimum: 4096 bytes 00:09:48.203 Memory Page Size Maximum: 65536 bytes 00:09:48.203 Persistent Memory Region: Not Supported 00:09:48.203 Optional Asynchronous Events Supported 00:09:48.203 Namespace Attribute Notices: Supported 00:09:48.203 Firmware Activation Notices: Not Supported 00:09:48.203 ANA Change Notices: Not Supported 00:09:48.203 PLE Aggregate Log Change Notices: Not Supported 00:09:48.203 LBA Status Info Alert Notices: Not Supported 00:09:48.203 EGE Aggregate Log Change Notices: Not Supported 00:09:48.203 Normal NVM Subsystem Shutdown event: Not Supported 00:09:48.203 Zone Descriptor Change Notices: Not Supported 00:09:48.203 Discovery Log Change Notices: Not Supported 00:09:48.203 Controller Attributes 00:09:48.203 128-bit Host Identifier: Not Supported 00:09:48.203 Non-Operational Permissive Mode: Not Supported 00:09:48.203 NVM Sets: Not Supported 00:09:48.203 Read Recovery Levels: Not Supported 00:09:48.203 Endurance Groups: Supported 00:09:48.203 Predictable Latency Mode: Not Supported 00:09:48.203 Traffic Based Keep Alive: Not Supported 00:09:48.203 Namespace Granularity: Not Supported 00:09:48.203 SQ Associations: Not Supported 00:09:48.203 UUID List: Not Supported 00:09:48.203 Multi-Domain Subsystem: Not Supported 00:09:48.203 Fixed Capacity Management: Not Supported 00:09:48.203 Variable Capacity Management: Not Supported 00:09:48.203 Delete Endurance Group: Not Supported 00:09:48.203 Delete NVM Set: Not Supported 00:09:48.203 Extended LBA Formats Supported: Supported 00:09:48.204 Flexible Data Placement Supported: Supported 00:09:48.204 00:09:48.204 Controller Memory Buffer Support 00:09:48.204 ================================ 00:09:48.204 Supported: No 00:09:48.204 00:09:48.204 Persistent Memory Region Support 00:09:48.204 ================================ 00:09:48.204 Supported: No 00:09:48.204 00:09:48.204 Admin Command Set Attributes 00:09:48.204 ============================ 00:09:48.204 Security Send/Receive: Not Supported 00:09:48.204 Format NVM: Supported 00:09:48.204 Firmware Activate/Download: Not Supported 00:09:48.204 Namespace Management: Supported 00:09:48.204 Device Self-Test: Not Supported 00:09:48.204 Directives: Supported 00:09:48.204 NVMe-MI: Not Supported 00:09:48.204 Virtualization Management: Not Supported 00:09:48.204 Doorbell Buffer Config: Supported 00:09:48.204 Get LBA Status Capability: Not Supported 00:09:48.204 Command & Feature Lockdown Capability: Not Supported 00:09:48.204 Abort Command Limit: 4 00:09:48.204 Async Event Request Limit: 4 00:09:48.204 Number of Firmware Slots: N/A 00:09:48.204 Firmware Slot 1 
Read-Only: N/A 00:09:48.204 Firmware Activation Without Reset: N/A 00:09:48.204 Multiple Update Detection Support: N/A 00:09:48.204 Firmware Update Granularity: No Information Provided 00:09:48.204 Per-Namespace SMART Log: Yes 00:09:48.204 Asymmetric Namespace Access Log Page: Not Supported 00:09:48.204 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:48.204 Command Effects Log Page: Supported 00:09:48.204 Get Log Page Extended Data: Supported 00:09:48.204 Telemetry Log Pages: Not Supported 00:09:48.204 Persistent Event Log Pages: Not Supported 00:09:48.204 Supported Log Pages Log Page: May Support 00:09:48.204 Commands Supported & Effects Log Page: Not Supported 00:09:48.204 Feature Identifiers & Effects Log Page: May Support 00:09:48.204 NVMe-MI Commands & Effects Log Page: May Support 00:09:48.204 Data Area 4 for Telemetry Log: Not Supported 00:09:48.204 Error Log Page Entries Supported: 1 00:09:48.204 Keep Alive: Not Supported 00:09:48.204 00:09:48.204 NVM Command Set Attributes 00:09:48.204 ========================== 00:09:48.204 Submission Queue Entry Size 00:09:48.204 Max: 64 00:09:48.204 Min: 64 00:09:48.204 Completion Queue Entry Size 00:09:48.204 Max: 16 00:09:48.204 Min: 16 00:09:48.204 Number of Namespaces: 256 00:09:48.204 Compare Command: Supported 00:09:48.204 Write Uncorrectable Command: Not Supported 00:09:48.204 Dataset Management Command: Supported 00:09:48.204 Write Zeroes Command: Supported 00:09:48.204 Set Features Save Field: Supported 00:09:48.204 Reservations: Not Supported 00:09:48.204 Timestamp: Supported 00:09:48.204 Copy: Supported 00:09:48.204 Volatile Write Cache: Present 00:09:48.204 Atomic Write Unit (Normal): 1 00:09:48.204 Atomic Write Unit (PFail): 1 00:09:48.204 Atomic Compare & Write Unit: 1 00:09:48.204 Fused Compare & Write: Not Supported 00:09:48.204 Scatter-Gather List 00:09:48.204 SGL Command Set: Supported 00:09:48.204 SGL Keyed: Not Supported 00:09:48.204 SGL Bit Bucket Descriptor: Not Supported 00:09:48.204 SGL Metadata Pointer: Not Supported 00:09:48.204 Oversized SGL: Not Supported 00:09:48.204 SGL Metadata Address: Not Supported 00:09:48.204 SGL Offset: Not Supported 00:09:48.204 Transport SGL Data Block: Not Supported 00:09:48.204 Replay Protected Memory Block: Not Supported 00:09:48.204 00:09:48.204 Firmware Slot Information 00:09:48.204 ========================= 00:09:48.204 Active slot: 1 00:09:48.204 Slot 1 Firmware Revision: 1.0 00:09:48.204 00:09:48.204 00:09:48.204 Commands Supported and Effects 00:09:48.204 ============================== 00:09:48.204 Admin Commands 00:09:48.204 -------------- 00:09:48.204 Delete I/O Submission Queue (00h): Supported 00:09:48.204 Create I/O Submission Queue (01h): Supported 00:09:48.204 Get Log Page (02h): Supported 00:09:48.204 Delete I/O Completion Queue (04h): Supported 00:09:48.204 Create I/O Completion Queue (05h): Supported 00:09:48.204 Identify (06h): Supported 00:09:48.204 Abort (08h): Supported 00:09:48.204 Set Features (09h): Supported 00:09:48.204 Get Features (0Ah): Supported 00:09:48.204 Asynchronous Event Request (0Ch): Supported 00:09:48.204 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:48.204 Directive Send (19h): Supported 00:09:48.204 Directive Receive (1Ah): Supported 00:09:48.204 Virtualization Management (1Ch): Supported 00:09:48.204 Doorbell Buffer Config (7Ch): Supported 00:09:48.204 Format NVM (80h): Supported LBA-Change 00:09:48.204 I/O Commands 00:09:48.204 ------------ 00:09:48.204 Flush (00h): Supported LBA-Change 00:09:48.204 Write (01h): Supported 
LBA-Change 00:09:48.204 Read (02h): Supported 00:09:48.204 Compare (05h): Supported 00:09:48.204 Write Zeroes (08h): Supported LBA-Change 00:09:48.204 Dataset Management (09h): Supported LBA-Change 00:09:48.204 Unknown (0Ch): Supported 00:09:48.204 Unknown (12h): Supported 00:09:48.204 Copy (19h): Supported LBA-Change 00:09:48.204 Unknown (1Dh): Supported LBA-Change 00:09:48.204 00:09:48.204 Error Log 00:09:48.204 ========= 00:09:48.204 00:09:48.204 Arbitration 00:09:48.204 =========== 00:09:48.204 Arbitration Burst: no limit 00:09:48.204 00:09:48.204 Power Management 00:09:48.204 ================ 00:09:48.204 Number of Power States: 1 00:09:48.204 Current Power State: Power State #0 00:09:48.204 Power State #0: 00:09:48.204 Max Power: 25.00 W 00:09:48.204 Non-Operational State: Operational 00:09:48.204 Entry Latency: 16 microseconds 00:09:48.204 Exit Latency: 4 microseconds 00:09:48.204 Relative Read Throughput: 0 00:09:48.204 Relative Read Latency: 0 00:09:48.204 Relative Write Throughput: 0 00:09:48.204 Relative Write Latency: 0 00:09:48.204 Idle Power: Not Reported 00:09:48.204 Active Power: Not Reported 00:09:48.204 Non-Operational Permissive Mode: Not Supported 00:09:48.204 00:09:48.204 Health Information 00:09:48.204 ================== 00:09:48.204 Critical Warnings: 00:09:48.204 Available Spare Space: OK 00:09:48.204 Temperature: OK 00:09:48.204 Device Reliability: OK 00:09:48.204 Read Only: No 00:09:48.204 Volatile Memory Backup: OK 00:09:48.204 Current Temperature: 323 Kelvin (50 Celsius) 00:09:48.204 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:48.204 Available Spare: 0% 00:09:48.204 Available Spare Threshold: 0% 00:09:48.204 Life Percentage Used: 0% 00:09:48.204 Data Units Read: 916 00:09:48.204 Data Units Written: 810 00:09:48.204 Host Read Commands: 40252 00:09:48.204 Host Write Commands: 38842 00:09:48.204 Controller Busy Time: 0 minutes 00:09:48.204 Power Cycles: 0 00:09:48.204 Power On Hours: 0 hours 00:09:48.204 Unsafe Shutdowns: 0 00:09:48.204 Unrecoverable Media Errors: 0 00:09:48.204 Lifetime Error Log Entries: 0 00:09:48.204 Warning Temperature Time: 0 minutes 00:09:48.204 Critical Temperature Time: 0 minutes 00:09:48.204 00:09:48.204 Number of Queues 00:09:48.204 ================ 00:09:48.204 Number of I/O Submission Queues: 64 00:09:48.204 Number of I/O Completion Queues: 64 00:09:48.204 00:09:48.204 ZNS Specific Controller Data 00:09:48.204 ============================ 00:09:48.204 Zone Append Size Limit: 0 00:09:48.204 00:09:48.204 00:09:48.204 Active Namespaces 00:09:48.204 ================= 00:09:48.204 Namespace ID:1 00:09:48.204 Error Recovery Timeout: Unlimited 00:09:48.204 Command Set Identifier: NVM (00h) 00:09:48.204 Deallocate: Supported 00:09:48.204 Deallocated/Unwritten Error: Supported 00:09:48.204 Deallocated Read Value: All 0x00 00:09:48.204 Deallocate in Write Zeroes: Not Supported 00:09:48.204 Deallocated Guard Field: 0xFFFF 00:09:48.204 Flush: Supported 00:09:48.204 Reservation: Not Supported 00:09:48.204 Namespace Sharing Capabilities: Multiple Controllers 00:09:48.204 Size (in LBAs): 262144 (1GiB) 00:09:48.204 Capacity (in LBAs): 262144 (1GiB) 00:09:48.204 Utilization (in LBAs): 262144 (1GiB) 00:09:48.204 Thin Provisioning: Not Supported 00:09:48.204 Per-NS Atomic Units: No 00:09:48.204 Maximum Single Source Range Length: 128 00:09:48.204 Maximum Copy Length: 128 00:09:48.204 Maximum Source Range Count: 128 00:09:48.204 NGUID/EUI64 Never Reused: No 00:09:48.204 Namespace Write Protected: No 00:09:48.204 Endurance group ID: 1 00:09:48.204 
Number of LBA Formats: 8 00:09:48.204 Current LBA Format: LBA Format #04 00:09:48.204 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:48.204 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:48.204 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:48.204 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:48.204 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:48.204 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:48.204 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:48.204 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:48.204 00:09:48.204 Get Feature FDP: 00:09:48.204 ================ 00:09:48.204 Enabled: Yes 00:09:48.204 FDP configuration index: 0 00:09:48.204 00:09:48.204 FDP configurations log page 00:09:48.204 =========================== 00:09:48.204 Number of FDP configurations: 1 00:09:48.204 Version: 0 00:09:48.204 Size: 112 00:09:48.205 FDP Configuration Descriptor: 0 00:09:48.205 Descriptor Size: 96 00:09:48.205 Reclaim Group Identifier format: 2 00:09:48.205 FDP Volatile Write Cache: Not Present 00:09:48.205 FDP Configuration: Valid 00:09:48.205 Vendor Specific Size: 0 00:09:48.205 Number of Reclaim Groups: 2 00:09:48.205 Number of Reclaim Unit Handles: 8 00:09:48.205 Max Placement Identifiers: 128 00:09:48.205 Number of Namespaces Supported: 256 00:09:48.205 Reclaim Unit Nominal Size: 6000000 bytes 00:09:48.205 Estimated Reclaim Unit Time Limit: Not Reported 00:09:48.205 RUH Desc #000: RUH Type: Initially Isolated 00:09:48.205 RUH Desc #001: RUH Type: Initially Isolated 00:09:48.205 RUH Desc #002: RUH Type: Initially Isolated 00:09:48.205 RUH Desc #003: RUH Type: Initially Isolated 00:09:48.205 RUH Desc #004: RUH Type: Initially Isolated 00:09:48.205 RUH Desc #005: RUH Type: Initially Isolated 00:09:48.205 RUH Desc #006: RUH Type: Initially Isolated 00:09:48.205 RUH Desc #007: RUH Type: Initially Isolated 00:09:48.205 00:09:48.205 FDP reclaim unit handle usage log page 00:09:48.205 ====================================== 00:09:48.205 Number of Reclaim Unit Handles: 8 00:09:48.205 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:48.205 RUH Usage Desc #001: RUH Attributes: Unused 00:09:48.205 RUH Usage Desc #002: RUH Attributes: Unused 00:09:48.205 RUH Usage Desc #003: RUH Attributes: Unused 00:09:48.205 RUH Usage Desc #004: RUH Attributes: Unused 00:09:48.205 RUH Usage Desc #005: RUH Attributes: Unused 00:09:48.205 RUH Usage Desc #006: RUH Attributes: Unused 00:09:48.205 RUH Usage Desc #007: RUH Attributes: Unused 00:09:48.205 00:09:48.205 FDP statistics log page 00:09:48.205 ======================= 00:09:48.205 Host bytes with metadata written: 528719872 00:09:48.205 Media bytes with metadata written: 528777216 00:09:48.205 Media bytes erased: 0 00:09:48.205 00:09:48.205 FDP events log page 00:09:48.205 =================== 00:09:48.205 Number of FDP events: 0 00:09:48.205 00:09:48.205 ************************************ 00:09:48.205 END TEST nvme_identify 00:09:48.205 ************************************ 00:09:48.205 00:09:48.205 real 0m1.374s 00:09:48.205 user 0m0.459s 00:09:48.205 sys 0m0.706s 00:09:48.205 15:45:22 nvme.nvme_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:48.205 15:45:22 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:48.205 15:45:22 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:48.205 15:45:22 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:48.205 15:45:22 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 
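The identify pass that just finished can be reproduced outside the test harness by pointing spdk_nvme_identify at a PCIe transport ID, exactly as nvme.sh@16 does above. A minimal sketch, assuming the vagrant build path recorded in this log and that the devices were first bound to a userspace driver with SPDK's standard scripts/setup.sh:

    # Bind the NVMe devices to a userspace driver (standard SPDK prerequisite)
    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh

    # Dump controller and namespace data for one controller; -r selects the
    # transport ID and -i 0 the shared memory group ID, matching the log above
    sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:13.0' -i 0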
00:09:48.205 15:45:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:48.205 ************************************ 00:09:48.205 START TEST nvme_perf 00:09:48.205 ************************************ 00:09:48.205 15:45:22 nvme.nvme_perf -- common/autotest_common.sh@1121 -- # nvme_perf 00:09:48.205 15:45:22 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:49.579 Initializing NVMe Controllers 00:09:49.579 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:49.579 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:49.579 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:49.579 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:49.579 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:49.579 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:49.579 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:49.579 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:49.579 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:49.579 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:49.579 Initialization complete. Launching workers. 00:09:49.579 ======================================================== 00:09:49.579 Latency(us) 00:09:49.579 Device Information : IOPS MiB/s Average min max 00:09:49.579 PCIE (0000:00:10.0) NSID 1 from core 0: 14100.29 165.24 9081.82 5953.97 40612.67 00:09:49.579 PCIE (0000:00:11.0) NSID 1 from core 0: 14100.29 165.24 9076.13 5760.47 40043.25 00:09:49.579 PCIE (0000:00:13.0) NSID 1 from core 0: 14100.29 165.24 9068.01 4874.92 40158.36 00:09:49.579 PCIE (0000:00:12.0) NSID 1 from core 0: 14100.29 165.24 9059.82 4442.97 39684.36 00:09:49.579 PCIE (0000:00:12.0) NSID 2 from core 0: 14100.29 165.24 9051.64 4116.05 39180.56 00:09:49.579 PCIE (0000:00:12.0) NSID 3 from core 0: 14164.09 165.99 9002.42 3711.77 33843.65 00:09:49.579 ======================================================== 00:09:49.579 Total : 84665.54 992.17 9056.60 3711.77 40612.67 00:09:49.579 00:09:49.579 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:49.579 ================================================================================= 00:09:49.579 1.00000% : 7790.625us 00:09:49.579 10.00000% : 8053.822us 00:09:49.579 25.00000% : 8317.018us 00:09:49.579 50.00000% : 8580.215us 00:09:49.579 75.00000% : 8896.051us 00:09:49.579 90.00000% : 9211.888us 00:09:49.579 95.00000% : 11370.101us 00:09:49.579 98.00000% : 17265.709us 00:09:49.579 99.00000% : 19792.398us 00:09:49.579 99.50000% : 34741.976us 00:09:49.579 99.90000% : 40427.027us 00:09:49.579 99.99000% : 40637.584us 00:09:49.580 99.99900% : 40637.584us 00:09:49.580 99.99990% : 40637.584us 00:09:49.580 99.99999% : 40637.584us 00:09:49.580 00:09:49.580 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:49.580 ================================================================================= 00:09:49.580 1.00000% : 7843.264us 00:09:49.580 10.00000% : 8106.461us 00:09:49.580 25.00000% : 8317.018us 00:09:49.580 50.00000% : 8580.215us 00:09:49.580 75.00000% : 8843.412us 00:09:49.580 90.00000% : 9159.248us 00:09:49.580 95.00000% : 11106.904us 00:09:49.580 98.00000% : 17370.988us 00:09:49.580 99.00000% : 20318.792us 00:09:49.580 99.50000% : 34531.418us 00:09:49.580 99.90000% : 40005.912us 00:09:49.580 99.99000% : 40216.469us 00:09:49.580 99.99900% : 40216.469us 00:09:49.580 99.99990% : 40216.469us 00:09:49.580 99.99999% : 40216.469us 00:09:49.580 
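In the device table above, the MiB/s column follows from the IOPS column and the fixed 12288-byte I/O size requested with -o 12288 (MiB/s = IOPS * io_size / 2^20). A quick shell check against the PCIE (0000:00:10.0) row:

    # 14100.29 IOPS at 12288 bytes per I/O, converted to MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 14100.29 * 12288 / (1024 * 1024) }'
    # prints 165.24, matching the table row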
00:09:49.580 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:49.580 ================================================================================= 00:09:49.580 1.00000% : 7790.625us 00:09:49.580 10.00000% : 8106.461us 00:09:49.580 25.00000% : 8317.018us 00:09:49.580 50.00000% : 8580.215us 00:09:49.580 75.00000% : 8843.412us 00:09:49.580 90.00000% : 9106.609us 00:09:49.580 95.00000% : 10685.790us 00:09:49.580 98.00000% : 16634.037us 00:09:49.580 99.00000% : 21687.415us 00:09:49.580 99.50000% : 34531.418us 00:09:49.580 99.90000% : 40005.912us 00:09:49.580 99.99000% : 40216.469us 00:09:49.580 99.99900% : 40216.469us 00:09:49.580 99.99990% : 40216.469us 00:09:49.580 99.99999% : 40216.469us 00:09:49.580 00:09:49.580 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:49.580 ================================================================================= 00:09:49.580 1.00000% : 7790.625us 00:09:49.580 10.00000% : 8106.461us 00:09:49.580 25.00000% : 8317.018us 00:09:49.580 50.00000% : 8580.215us 00:09:49.580 75.00000% : 8843.412us 00:09:49.580 90.00000% : 9159.248us 00:09:49.580 95.00000% : 11212.183us 00:09:49.580 98.00000% : 16739.316us 00:09:49.580 99.00000% : 19055.447us 00:09:49.580 99.50000% : 34110.304us 00:09:49.580 99.90000% : 39584.797us 00:09:49.580 99.99000% : 39795.354us 00:09:49.580 99.99900% : 39795.354us 00:09:49.580 99.99990% : 39795.354us 00:09:49.580 99.99999% : 39795.354us 00:09:49.580 00:09:49.580 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:49.580 ================================================================================= 00:09:49.580 1.00000% : 7790.625us 00:09:49.580 10.00000% : 8106.461us 00:09:49.580 25.00000% : 8317.018us 00:09:49.580 50.00000% : 8580.215us 00:09:49.580 75.00000% : 8843.412us 00:09:49.580 90.00000% : 9159.248us 00:09:49.580 95.00000% : 11633.298us 00:09:49.580 98.00000% : 16739.316us 00:09:49.580 99.00000% : 18844.890us 00:09:49.580 99.50000% : 33689.189us 00:09:49.580 99.90000% : 39163.682us 00:09:49.580 99.99000% : 39374.239us 00:09:49.580 99.99900% : 39374.239us 00:09:49.580 99.99990% : 39374.239us 00:09:49.580 99.99999% : 39374.239us 00:09:49.580 00:09:49.580 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:49.580 ================================================================================= 00:09:49.580 1.00000% : 7790.625us 00:09:49.580 10.00000% : 8106.461us 00:09:49.580 25.00000% : 8317.018us 00:09:49.580 50.00000% : 8580.215us 00:09:49.580 75.00000% : 8843.412us 00:09:49.580 90.00000% : 9159.248us 00:09:49.580 95.00000% : 11738.577us 00:09:49.580 98.00000% : 16844.594us 00:09:49.580 99.00000% : 18950.169us 00:09:49.580 99.50000% : 28214.696us 00:09:49.580 99.90000% : 33689.189us 00:09:49.580 99.99000% : 33899.746us 00:09:49.580 99.99900% : 33899.746us 00:09:49.580 99.99990% : 33899.746us 00:09:49.580 99.99999% : 33899.746us 00:09:49.580 00:09:49.580 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:49.580 ============================================================================== 00:09:49.580 Range in us Cumulative IO count 00:09:49.580 5948.247 - 5974.567: 0.0283% ( 4) 00:09:49.580 5974.567 - 6000.887: 0.0354% ( 1) 00:09:49.580 6000.887 - 6027.206: 0.0495% ( 2) 00:09:49.580 6027.206 - 6053.526: 0.0636% ( 2) 00:09:49.580 6053.526 - 6079.846: 0.0778% ( 2) 00:09:49.580 6079.846 - 6106.165: 0.0848% ( 1) 00:09:49.580 6106.165 - 6132.485: 0.1061% ( 3) 00:09:49.580 6132.485 - 6158.805: 0.1131% ( 1) 00:09:49.580 6158.805 - 
6185.124: 0.1273% ( 2) 00:09:49.580 6185.124 - 6211.444: 0.1414% ( 2) 00:09:49.580 6211.444 - 6237.764: 0.1555% ( 2) 00:09:49.580 6237.764 - 6264.084: 0.1768% ( 3) 00:09:49.580 6264.084 - 6290.403: 0.1838% ( 1) 00:09:49.580 6290.403 - 6316.723: 0.1980% ( 2) 00:09:49.580 6316.723 - 6343.043: 0.2121% ( 2) 00:09:49.580 6343.043 - 6369.362: 0.2333% ( 3) 00:09:49.580 6369.362 - 6395.682: 0.2404% ( 1) 00:09:49.580 6395.682 - 6422.002: 0.2616% ( 3) 00:09:49.580 6422.002 - 6448.321: 0.2687% ( 1) 00:09:49.580 6448.321 - 6474.641: 0.2828% ( 2) 00:09:49.580 6474.641 - 6500.961: 0.3040% ( 3) 00:09:49.580 6500.961 - 6527.280: 0.3111% ( 1) 00:09:49.580 6527.280 - 6553.600: 0.3323% ( 3) 00:09:49.580 6553.600 - 6579.920: 0.3464% ( 2) 00:09:49.580 6579.920 - 6606.239: 0.3606% ( 2) 00:09:49.580 6606.239 - 6632.559: 0.3747% ( 2) 00:09:49.580 6632.559 - 6658.879: 0.3818% ( 1) 00:09:49.580 6658.879 - 6685.198: 0.4030% ( 3) 00:09:49.580 6711.518 - 6737.838: 0.4242% ( 3) 00:09:49.580 6737.838 - 6790.477: 0.4525% ( 4) 00:09:49.580 7580.067 - 7632.707: 0.4666% ( 2) 00:09:49.580 7632.707 - 7685.346: 0.5939% ( 18) 00:09:49.580 7685.346 - 7737.986: 0.8555% ( 37) 00:09:49.580 7737.986 - 7790.625: 1.4423% ( 83) 00:09:49.580 7790.625 - 7843.264: 2.4533% ( 143) 00:09:49.580 7843.264 - 7895.904: 3.8532% ( 198) 00:09:49.580 7895.904 - 7948.543: 5.5995% ( 247) 00:09:49.580 7948.543 - 8001.182: 7.6711% ( 293) 00:09:49.580 8001.182 - 8053.822: 10.0184% ( 332) 00:09:49.580 8053.822 - 8106.461: 12.7333% ( 384) 00:09:49.580 8106.461 - 8159.100: 16.0492% ( 469) 00:09:49.580 8159.100 - 8211.740: 19.5984% ( 502) 00:09:49.580 8211.740 - 8264.379: 23.6072% ( 567) 00:09:49.580 8264.379 - 8317.018: 28.0472% ( 628) 00:09:49.580 8317.018 - 8369.658: 32.5226% ( 633) 00:09:49.580 8369.658 - 8422.297: 37.2455% ( 668) 00:09:49.580 8422.297 - 8474.937: 42.0249% ( 676) 00:09:49.580 8474.937 - 8527.576: 46.6770% ( 658) 00:09:49.580 8527.576 - 8580.215: 51.3221% ( 657) 00:09:49.580 8580.215 - 8632.855: 56.0591% ( 670) 00:09:49.580 8632.855 - 8685.494: 60.8102% ( 672) 00:09:49.580 8685.494 - 8738.133: 65.2856% ( 633) 00:09:49.580 8738.133 - 8790.773: 69.5984% ( 610) 00:09:49.580 8790.773 - 8843.412: 73.6143% ( 568) 00:09:49.580 8843.412 - 8896.051: 77.0221% ( 482) 00:09:49.580 8896.051 - 8948.691: 80.1329% ( 440) 00:09:49.580 8948.691 - 9001.330: 82.9044% ( 392) 00:09:49.580 9001.330 - 9053.969: 85.4355% ( 358) 00:09:49.580 9053.969 - 9106.609: 87.5848% ( 304) 00:09:49.580 9106.609 - 9159.248: 89.2039% ( 229) 00:09:49.580 9159.248 - 9211.888: 90.5472% ( 190) 00:09:49.580 9211.888 - 9264.527: 91.4876% ( 133) 00:09:49.580 9264.527 - 9317.166: 92.0037% ( 73) 00:09:49.580 9317.166 - 9369.806: 92.3784% ( 53) 00:09:49.580 9369.806 - 9422.445: 92.5551% ( 25) 00:09:49.580 9422.445 - 9475.084: 92.7460% ( 27) 00:09:49.580 9475.084 - 9527.724: 92.9016% ( 22) 00:09:49.580 9527.724 - 9580.363: 93.0501% ( 21) 00:09:49.580 9580.363 - 9633.002: 93.1773% ( 18) 00:09:49.580 9633.002 - 9685.642: 93.2975% ( 17) 00:09:49.580 9685.642 - 9738.281: 93.4106% ( 16) 00:09:49.580 9738.281 - 9790.920: 93.5308% ( 17) 00:09:49.580 9790.920 - 9843.560: 93.6015% ( 10) 00:09:49.580 9843.560 - 9896.199: 93.6934% ( 13) 00:09:49.580 9896.199 - 9948.839: 93.7429% ( 7) 00:09:49.580 9948.839 - 10001.478: 93.8136% ( 10) 00:09:49.580 10001.478 - 10054.117: 93.8773% ( 9) 00:09:49.580 10054.117 - 10106.757: 93.9338% ( 8) 00:09:49.580 10106.757 - 10159.396: 94.0328% ( 14) 00:09:49.580 10159.396 - 10212.035: 94.0894% ( 8) 00:09:49.580 10212.035 - 10264.675: 94.1742% ( 12) 00:09:49.580 
10264.675 - 10317.314: 94.2308% ( 8) 00:09:49.580 10317.314 - 10369.953: 94.2873% ( 8) 00:09:49.580 10369.953 - 10422.593: 94.3439% ( 8) 00:09:49.580 10422.593 - 10475.232: 94.4005% ( 8) 00:09:49.580 10475.232 - 10527.871: 94.4499% ( 7) 00:09:49.580 10527.871 - 10580.511: 94.4924% ( 6) 00:09:49.580 10580.511 - 10633.150: 94.5419% ( 7) 00:09:49.580 10633.150 - 10685.790: 94.5701% ( 4) 00:09:49.580 10685.790 - 10738.429: 94.6055% ( 5) 00:09:49.580 10738.429 - 10791.068: 94.6338% ( 4) 00:09:49.580 10791.068 - 10843.708: 94.6903% ( 8) 00:09:49.580 10843.708 - 10896.347: 94.7257% ( 5) 00:09:49.580 10896.347 - 10948.986: 94.7752% ( 7) 00:09:49.580 10948.986 - 11001.626: 94.8035% ( 4) 00:09:49.580 11001.626 - 11054.265: 94.8529% ( 7) 00:09:49.580 11054.265 - 11106.904: 94.8671% ( 2) 00:09:49.580 11106.904 - 11159.544: 94.9095% ( 6) 00:09:49.580 11159.544 - 11212.183: 94.9236% ( 2) 00:09:49.580 11212.183 - 11264.822: 94.9590% ( 5) 00:09:49.580 11264.822 - 11317.462: 94.9873% ( 4) 00:09:49.580 11317.462 - 11370.101: 95.0297% ( 6) 00:09:49.580 11370.101 - 11422.741: 95.0509% ( 3) 00:09:49.580 11422.741 - 11475.380: 95.0863% ( 5) 00:09:49.580 11475.380 - 11528.019: 95.1075% ( 3) 00:09:49.580 11528.019 - 11580.659: 95.1428% ( 5) 00:09:49.580 11580.659 - 11633.298: 95.1711% ( 4) 00:09:49.580 11633.298 - 11685.937: 95.2064% ( 5) 00:09:49.580 11685.937 - 11738.577: 95.2347% ( 4) 00:09:49.580 11738.577 - 11791.216: 95.2559% ( 3) 00:09:49.580 11791.216 - 11843.855: 95.2842% ( 4) 00:09:49.580 11843.855 - 11896.495: 95.3125% ( 4) 00:09:49.580 11896.495 - 11949.134: 95.3408% ( 4) 00:09:49.580 11949.134 - 12001.773: 95.3549% ( 2) 00:09:49.580 12001.773 - 12054.413: 95.3691% ( 2) 00:09:49.580 12054.413 - 12107.052: 95.3832% ( 2) 00:09:49.580 12107.052 - 12159.692: 95.4115% ( 4) 00:09:49.580 12212.331 - 12264.970: 95.4256% ( 2) 00:09:49.580 12264.970 - 12317.610: 95.4398% ( 2) 00:09:49.580 12317.610 - 12370.249: 95.4468% ( 1) 00:09:49.580 12370.249 - 12422.888: 95.4963% ( 7) 00:09:49.581 12422.888 - 12475.528: 95.5175% ( 3) 00:09:49.581 12475.528 - 12528.167: 95.5317% ( 2) 00:09:49.581 12528.167 - 12580.806: 95.5458% ( 2) 00:09:49.581 12580.806 - 12633.446: 95.5670% ( 3) 00:09:49.581 12633.446 - 12686.085: 95.5812% ( 2) 00:09:49.581 12686.085 - 12738.724: 95.5953% ( 2) 00:09:49.581 12738.724 - 12791.364: 95.6165% ( 3) 00:09:49.581 12791.364 - 12844.003: 95.6236% ( 1) 00:09:49.581 12844.003 - 12896.643: 95.6801% ( 8) 00:09:49.581 12896.643 - 12949.282: 95.7226% ( 6) 00:09:49.581 12949.282 - 13001.921: 95.7650% ( 6) 00:09:49.581 13001.921 - 13054.561: 95.7933% ( 4) 00:09:49.581 13054.561 - 13107.200: 95.8145% ( 3) 00:09:49.581 13107.200 - 13159.839: 95.8640% ( 7) 00:09:49.581 13159.839 - 13212.479: 95.9135% ( 7) 00:09:49.581 13212.479 - 13265.118: 95.9630% ( 7) 00:09:49.581 13265.118 - 13317.757: 96.0054% ( 6) 00:09:49.581 13317.757 - 13370.397: 96.0761% ( 10) 00:09:49.581 13370.397 - 13423.036: 96.1751% ( 14) 00:09:49.581 13423.036 - 13475.676: 96.2316% ( 8) 00:09:49.581 13475.676 - 13580.954: 96.3872% ( 22) 00:09:49.581 13580.954 - 13686.233: 96.5356% ( 21) 00:09:49.581 13686.233 - 13791.512: 96.6912% ( 22) 00:09:49.581 13791.512 - 13896.790: 96.8184% ( 18) 00:09:49.581 13896.790 - 14002.069: 96.8891% ( 10) 00:09:49.581 14002.069 - 14107.348: 96.9457% ( 8) 00:09:49.581 14107.348 - 14212.627: 97.0305% ( 12) 00:09:49.581 14212.627 - 14317.905: 97.1083% ( 11) 00:09:49.581 14317.905 - 14423.184: 97.1790% ( 10) 00:09:49.581 14423.184 - 14528.463: 97.2073% ( 4) 00:09:49.581 14528.463 - 14633.741: 97.2285% ( 3) 
00:09:49.581 14633.741 - 14739.020: 97.2709% ( 6) 00:09:49.581 14739.020 - 14844.299: 97.3275% ( 8) 00:09:49.581 14844.299 - 14949.578: 97.3628% ( 5) 00:09:49.581 14949.578 - 15054.856: 97.3982% ( 5) 00:09:49.581 15054.856 - 15160.135: 97.4406% ( 6) 00:09:49.581 15160.135 - 15265.414: 97.4760% ( 5) 00:09:49.581 15265.414 - 15370.692: 97.5113% ( 5) 00:09:49.581 15370.692 - 15475.971: 97.5467% ( 5) 00:09:49.581 15475.971 - 15581.250: 97.5891% ( 6) 00:09:49.581 15581.250 - 15686.529: 97.6244% ( 5) 00:09:49.581 15686.529 - 15791.807: 97.6527% ( 4) 00:09:49.581 15791.807 - 15897.086: 97.6810% ( 4) 00:09:49.581 15897.086 - 16002.365: 97.7093% ( 4) 00:09:49.581 16002.365 - 16107.643: 97.7376% ( 4) 00:09:49.581 16634.037 - 16739.316: 97.7517% ( 2) 00:09:49.581 16739.316 - 16844.594: 97.7870% ( 5) 00:09:49.581 16844.594 - 16949.873: 97.8153% ( 4) 00:09:49.581 16949.873 - 17055.152: 97.8931% ( 11) 00:09:49.581 17055.152 - 17160.431: 97.9567% ( 9) 00:09:49.581 17160.431 - 17265.709: 98.0345% ( 11) 00:09:49.581 17265.709 - 17370.988: 98.0911% ( 8) 00:09:49.581 17370.988 - 17476.267: 98.1688% ( 11) 00:09:49.581 17476.267 - 17581.545: 98.2395% ( 10) 00:09:49.581 17581.545 - 17686.824: 98.3173% ( 11) 00:09:49.581 17686.824 - 17792.103: 98.3880% ( 10) 00:09:49.581 17792.103 - 17897.382: 98.4587% ( 10) 00:09:49.581 17897.382 - 18002.660: 98.5082% ( 7) 00:09:49.581 18002.660 - 18107.939: 98.5365% ( 4) 00:09:49.581 18107.939 - 18213.218: 98.5718% ( 5) 00:09:49.581 18213.218 - 18318.496: 98.6072% ( 5) 00:09:49.581 18318.496 - 18423.775: 98.6425% ( 5) 00:09:49.581 18634.333 - 18739.611: 98.6637% ( 3) 00:09:49.581 18739.611 - 18844.890: 98.6991% ( 5) 00:09:49.581 18844.890 - 18950.169: 98.7344% ( 5) 00:09:49.581 18950.169 - 19055.447: 98.7698% ( 5) 00:09:49.581 19055.447 - 19160.726: 98.8051% ( 5) 00:09:49.581 19160.726 - 19266.005: 98.8546% ( 7) 00:09:49.581 19266.005 - 19371.284: 98.8758% ( 3) 00:09:49.581 19371.284 - 19476.562: 98.9183% ( 6) 00:09:49.581 19476.562 - 19581.841: 98.9536% ( 5) 00:09:49.581 19581.841 - 19687.120: 98.9819% ( 4) 00:09:49.581 19687.120 - 19792.398: 99.0173% ( 5) 00:09:49.581 19792.398 - 19897.677: 99.0597% ( 6) 00:09:49.581 19897.677 - 20002.956: 99.0950% ( 5) 00:09:49.581 33478.631 - 33689.189: 99.1162% ( 3) 00:09:49.581 33689.189 - 33899.746: 99.2011% ( 12) 00:09:49.581 33899.746 - 34110.304: 99.3001% ( 14) 00:09:49.581 34110.304 - 34320.861: 99.3778% ( 11) 00:09:49.581 34320.861 - 34531.418: 99.4697% ( 13) 00:09:49.581 34531.418 - 34741.976: 99.5404% ( 10) 00:09:49.581 34741.976 - 34952.533: 99.5475% ( 1) 00:09:49.581 39374.239 - 39584.797: 99.5829% ( 5) 00:09:49.581 39584.797 - 39795.354: 99.6748% ( 13) 00:09:49.581 39795.354 - 40005.912: 99.7667% ( 13) 00:09:49.581 40005.912 - 40216.469: 99.8586% ( 13) 00:09:49.581 40216.469 - 40427.027: 99.9364% ( 11) 00:09:49.581 40427.027 - 40637.584: 100.0000% ( 9) 00:09:49.581 00:09:49.581 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:49.581 ============================================================================== 00:09:49.581 Range in us Cumulative IO count 00:09:49.581 5737.690 - 5764.010: 0.0071% ( 1) 00:09:49.581 5764.010 - 5790.329: 0.0354% ( 4) 00:09:49.581 5790.329 - 5816.649: 0.0566% ( 3) 00:09:49.581 5816.649 - 5842.969: 0.0636% ( 1) 00:09:49.581 5842.969 - 5869.288: 0.0848% ( 3) 00:09:49.581 5869.288 - 5895.608: 0.0990% ( 2) 00:09:49.581 5895.608 - 5921.928: 0.1202% ( 3) 00:09:49.581 5921.928 - 5948.247: 0.1343% ( 2) 00:09:49.581 5948.247 - 5974.567: 0.1485% ( 2) 00:09:49.581 5974.567 - 6000.887: 
0.1697% ( 3) 00:09:49.581 6000.887 - 6027.206: 0.1838% ( 2) 00:09:49.581 6027.206 - 6053.526: 0.2050% ( 3) 00:09:49.581 6053.526 - 6079.846: 0.2192% ( 2) 00:09:49.581 6079.846 - 6106.165: 0.2404% ( 3) 00:09:49.581 6106.165 - 6132.485: 0.2616% ( 3) 00:09:49.581 6132.485 - 6158.805: 0.2757% ( 2) 00:09:49.581 6158.805 - 6185.124: 0.2969% ( 3) 00:09:49.581 6185.124 - 6211.444: 0.3111% ( 2) 00:09:49.581 6211.444 - 6237.764: 0.3323% ( 3) 00:09:49.581 6237.764 - 6264.084: 0.3464% ( 2) 00:09:49.581 6264.084 - 6290.403: 0.3676% ( 3) 00:09:49.581 6290.403 - 6316.723: 0.3818% ( 2) 00:09:49.581 6316.723 - 6343.043: 0.3959% ( 2) 00:09:49.581 6343.043 - 6369.362: 0.4101% ( 2) 00:09:49.581 6369.362 - 6395.682: 0.4313% ( 3) 00:09:49.581 6395.682 - 6422.002: 0.4454% ( 2) 00:09:49.581 6422.002 - 6448.321: 0.4525% ( 1) 00:09:49.581 7632.707 - 7685.346: 0.4878% ( 5) 00:09:49.581 7685.346 - 7737.986: 0.5798% ( 13) 00:09:49.581 7737.986 - 7790.625: 0.7848% ( 29) 00:09:49.581 7790.625 - 7843.264: 1.2373% ( 64) 00:09:49.581 7843.264 - 7895.904: 2.2978% ( 150) 00:09:49.581 7895.904 - 7948.543: 3.6340% ( 189) 00:09:49.581 7948.543 - 8001.182: 5.6278% ( 282) 00:09:49.581 8001.182 - 8053.822: 7.7630% ( 302) 00:09:49.581 8053.822 - 8106.461: 10.2588% ( 353) 00:09:49.581 8106.461 - 8159.100: 13.2848% ( 428) 00:09:49.581 8159.100 - 8211.740: 16.9895% ( 524) 00:09:49.581 8211.740 - 8264.379: 21.0619% ( 576) 00:09:49.581 8264.379 - 8317.018: 25.6080% ( 643) 00:09:49.581 8317.018 - 8369.658: 30.4652% ( 687) 00:09:49.581 8369.658 - 8422.297: 35.6759% ( 737) 00:09:49.581 8422.297 - 8474.937: 41.1411% ( 773) 00:09:49.581 8474.937 - 8527.576: 46.7831% ( 798) 00:09:49.581 8527.576 - 8580.215: 52.5311% ( 813) 00:09:49.581 8580.215 - 8632.855: 57.8973% ( 759) 00:09:49.581 8632.855 - 8685.494: 62.9878% ( 720) 00:09:49.581 8685.494 - 8738.133: 67.9016% ( 695) 00:09:49.581 8738.133 - 8790.773: 72.2002% ( 608) 00:09:49.581 8790.773 - 8843.412: 76.0959% ( 551) 00:09:49.581 8843.412 - 8896.051: 79.5602% ( 490) 00:09:49.581 8896.051 - 8948.691: 82.7559% ( 452) 00:09:49.581 8948.691 - 9001.330: 85.5274% ( 392) 00:09:49.581 9001.330 - 9053.969: 87.7262% ( 311) 00:09:49.581 9053.969 - 9106.609: 89.4584% ( 245) 00:09:49.581 9106.609 - 9159.248: 90.7028% ( 176) 00:09:49.581 9159.248 - 9211.888: 91.6714% ( 137) 00:09:49.581 9211.888 - 9264.527: 92.3006% ( 89) 00:09:49.581 9264.527 - 9317.166: 92.6471% ( 49) 00:09:49.581 9317.166 - 9369.806: 92.8662% ( 31) 00:09:49.581 9369.806 - 9422.445: 93.0288% ( 23) 00:09:49.581 9422.445 - 9475.084: 93.2056% ( 25) 00:09:49.581 9475.084 - 9527.724: 93.3611% ( 22) 00:09:49.581 9527.724 - 9580.363: 93.5025% ( 20) 00:09:49.581 9580.363 - 9633.002: 93.6086% ( 15) 00:09:49.581 9633.002 - 9685.642: 93.6934% ( 12) 00:09:49.581 9685.642 - 9738.281: 93.7641% ( 10) 00:09:49.581 9738.281 - 9790.920: 93.8561% ( 13) 00:09:49.581 9790.920 - 9843.560: 93.9268% ( 10) 00:09:49.581 9843.560 - 9896.199: 94.0187% ( 13) 00:09:49.581 9896.199 - 9948.839: 94.0964% ( 11) 00:09:49.581 9948.839 - 10001.478: 94.1813% ( 12) 00:09:49.581 10001.478 - 10054.117: 94.2378% ( 8) 00:09:49.581 10054.117 - 10106.757: 94.2732% ( 5) 00:09:49.581 10106.757 - 10159.396: 94.3156% ( 6) 00:09:49.581 10159.396 - 10212.035: 94.3510% ( 5) 00:09:49.581 10212.035 - 10264.675: 94.3792% ( 4) 00:09:49.581 10264.675 - 10317.314: 94.4287% ( 7) 00:09:49.581 10317.314 - 10369.953: 94.4782% ( 7) 00:09:49.581 10369.953 - 10422.593: 94.5136% ( 5) 00:09:49.581 10422.593 - 10475.232: 94.5419% ( 4) 00:09:49.581 10475.232 - 10527.871: 94.5772% ( 5) 00:09:49.581 
[latency histogram buckets elided: continuation of the preceding controller's histogram, 10527 us buckets through 40216 us, cumulative 94.62% -> 100.00%]
00:09:49.582 
00:09:49.582 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:49.582 ==============================================================================
00:09:49.582        Range in us     Cumulative    IO count
[bucket detail elided: 4869 us through 40216 us, cumulative 0.04% -> 100.00%]
00:09:49.583 
00:09:49.583 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:49.583 ==============================================================================
00:09:49.583        Range in us     Cumulative    IO count
[bucket detail elided: 4421 us through 39795 us, cumulative 0.01% -> 100.00%]
00:09:49.584 
00:09:49.584 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:49.584 ==============================================================================
00:09:49.584        Range in us     Cumulative    IO count
[bucket detail elided: 4105 us through 39374 us, cumulative 0.01% -> 100.00%]
00:09:49.585 
00:09:49.585 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:49.585 ==============================================================================
00:09:49.585        Range in us     Cumulative    IO count
[bucket detail elided: 3711 us through 33899 us, cumulative 0.02% -> 100.00%]
00:09:49.587 
00:09:49.587 15:45:24 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:09:50.990 Initializing NVMe Controllers
00:09:50.990 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:50.990 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:50.990 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:50.990 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:50.990 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:50.990 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:50.990 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:50.990 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:50.990 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:50.990 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:50.990 Initialization complete. Launching workers.
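The command line captured above is the perf invocation driving this run: spdk_nvme_perf from the SPDK build tree, issuing 12288-byte writes at queue depth 128 for one second against every attached namespace, with latency tracking (-LL) enabled, which is what produces the summary tables and histograms that follow. A minimal sketch of reproducing the same run by hand, assuming the same checkout layout as this CI job (SPDK_REPO is a stand-in variable, not something the job defines):

    # Reproduce the captured run; SPDK_REPO is an assumed stand-in for the
    # CI checkout path seen in the log line above.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    # -q 128: queue depth; -w write: 100% writes; -o 12288: I/O size in bytes;
    # -t 1: run time in seconds; -LL: latency tracking; -i 0: shm group ID.
    "$SPDK_REPO/build/bin/spdk_nvme_perf" -q 128 -w write -o 12288 -t 1 -LL -i 0

The flag meanings here are read off the captured command line; consult spdk_nvme_perf --help on the matching SPDK revision for the authoritative list.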
00:09:50.990 ========================================================
00:09:50.990                                                                            Latency(us)
00:09:50.990 Device Information                     :       IOPS      MiB/s    Average        min        max
00:09:50.990 PCIE (0000:00:10.0) NSID 1 from core 0:   12641.26     148.14   10137.61    7442.14   32891.69
00:09:50.990 PCIE (0000:00:11.0) NSID 1 from core 0:   12641.26     148.14   10131.60    7214.07   32565.68
00:09:50.990 PCIE (0000:00:13.0) NSID 1 from core 0:   12641.26     148.14   10124.99    6508.79   33092.14
00:09:50.990 PCIE (0000:00:12.0) NSID 1 from core 0:   12641.26     148.14   10117.80    6440.61   32795.57
00:09:50.990 PCIE (0000:00:12.0) NSID 2 from core 0:   12641.26     148.14   10110.94    6007.05   32511.34
00:09:50.990 PCIE (0000:00:12.0) NSID 3 from core 0:   12705.10     148.89   10053.41    5866.98   26012.92
00:09:50.990 ========================================================
00:09:50.990 Total                                  :   75911.38     889.59   10112.67    5866.98   33092.14
00:09:50.990 
00:09:50.990 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:50.990 =================================================================================
[percentile detail elided: p1 8369.658us, p50 9633.002us, p99 25161.613us, p99.9 32846.959us]
00:09:50.990 
00:09:50.990 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:50.990 =================================================================================
[percentile detail elided: p1 8264.379us, p50 9633.002us, p99 24951.055us, p99.9 32425.844us]
00:09:50.990 
00:09:50.990 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:50.990 =================================================================================
[percentile detail elided: p1 8317.018us, p50 9633.002us, p99 25582.728us, p99.9 33057.516us]
00:09:50.990 
00:09:50.990 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:50.990 =================================================================================
[percentile detail elided: p1 8264.379us, p50 9633.002us, p99 25056.334us, p99.9 32636.402us]
00:09:50.990 
00:09:50.990 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:50.991 =================================================================================
[percentile detail elided: p1 8106.461us, p50 9633.002us, p99 24845.777us, p99.9 32425.844us]
00:09:50.991 
00:09:50.991 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:50.991 =================================================================================
[percentile detail elided: p1 8053.822us, p50 9633.002us, p99 18634.333us, p99.9 25898.564us]
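The per-device rows in the summary table above share a fixed shape (device, IOPS, MiB/s, average/min/max latency in microseconds), so they are easy to scrape when comparing runs. A small sketch, assuming the console output has been saved locally as console.log (an assumed filename):

    # Pull IOPS and average latency per namespace out of the summary table.
    # console.log is an assumed local copy of this console output.
    LOG=console.log
    awk '/PCIE \(.*\) NSID [0-9]+ from core/ && NF >= 13 {
            printf "%s NSID %s: %s IOPS, avg %s us\n", $3, $5, $9, $11
        }' "$LOG"

The NF >= 13 guard skips the "Summary latency data" and "Latency histogram" headers, which mention the same device names but carry no numeric columns.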
00:09:50.991 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:50.991 ==============================================================================
00:09:50.991        Range in us     Cumulative    IO count
[bucket detail elided: 7422 us through 33057 us, cumulative 0.04% -> 100.00%]
00:09:50.991 
00:09:50.991 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:50.991 ==============================================================================
00:09:50.991        Range in us     Cumulative    IO count
[bucket detail elided: 7211 us through 32636 us, cumulative 0.01% -> 100.00%]
00:09:50.992 
00:09:50.992 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:50.992 ==============================================================================
00:09:50.992        Range in us     Cumulative    IO count
[bucket detail elided: begins at the 6500 us bucket (0.01%); the excerpt cuts off mid-histogram in the 10001 - 10054 us bucket, last complete bucket at 63.43% cumulative]
65.2778% ( 234) 00:09:50.993 10054.117 - 10106.757: 67.1559% ( 238) 00:09:50.993 10106.757 - 10159.396: 69.0183% ( 236) 00:09:50.993 10159.396 - 10212.035: 70.4230% ( 178) 00:09:50.993 10212.035 - 10264.675: 72.0328% ( 204) 00:09:50.993 10264.675 - 10317.314: 73.5559% ( 193) 00:09:50.993 10317.314 - 10369.953: 74.9369% ( 175) 00:09:50.993 10369.953 - 10422.593: 76.3494% ( 179) 00:09:50.993 10422.593 - 10475.232: 77.5489% ( 152) 00:09:50.993 10475.232 - 10527.871: 79.3403% ( 227) 00:09:50.993 10527.871 - 10580.511: 80.6029% ( 160) 00:09:50.993 10580.511 - 10633.150: 81.7551% ( 146) 00:09:50.993 10633.150 - 10685.790: 82.9151% ( 147) 00:09:50.993 10685.790 - 10738.429: 84.0515% ( 144) 00:09:50.993 10738.429 - 10791.068: 85.1641% ( 141) 00:09:50.993 10791.068 - 10843.708: 86.2610% ( 139) 00:09:50.993 10843.708 - 10896.347: 87.2554% ( 126) 00:09:50.993 10896.347 - 10948.986: 88.2970% ( 132) 00:09:50.993 10948.986 - 11001.626: 89.0467% ( 95) 00:09:50.993 11001.626 - 11054.265: 89.9779% ( 118) 00:09:50.993 11054.265 - 11106.904: 90.5066% ( 67) 00:09:50.993 11106.904 - 11159.544: 90.9328% ( 54) 00:09:50.993 11159.544 - 11212.183: 91.1616% ( 29) 00:09:50.993 11212.183 - 11264.822: 91.3273% ( 21) 00:09:50.993 11264.822 - 11317.462: 91.4615% ( 17) 00:09:50.993 11317.462 - 11370.101: 91.7771% ( 40) 00:09:50.993 11370.101 - 11422.741: 92.0376% ( 33) 00:09:50.993 11422.741 - 11475.380: 92.1796% ( 18) 00:09:50.993 11475.380 - 11528.019: 92.5189% ( 43) 00:09:50.993 11528.019 - 11580.659: 92.6768% ( 20) 00:09:50.993 11580.659 - 11633.298: 92.7636% ( 11) 00:09:50.993 11633.298 - 11685.937: 92.8346% ( 9) 00:09:50.993 11685.937 - 11738.577: 92.8741% ( 5) 00:09:50.993 11738.577 - 11791.216: 92.8977% ( 3) 00:09:50.993 11791.216 - 11843.855: 92.9135% ( 2) 00:09:50.993 11843.855 - 11896.495: 92.9372% ( 3) 00:09:50.993 11896.495 - 11949.134: 92.9766% ( 5) 00:09:50.993 11949.134 - 12001.773: 93.0713% ( 12) 00:09:50.993 12001.773 - 12054.413: 93.1503% ( 10) 00:09:50.993 12054.413 - 12107.052: 93.2292% ( 10) 00:09:50.993 12107.052 - 12159.692: 93.3554% ( 16) 00:09:50.993 12159.692 - 12212.331: 93.5054% ( 19) 00:09:50.993 12212.331 - 12264.970: 93.6711% ( 21) 00:09:50.993 12264.970 - 12317.610: 93.8052% ( 17) 00:09:50.993 12317.610 - 12370.249: 93.9315% ( 16) 00:09:50.993 12370.249 - 12422.888: 94.2393% ( 39) 00:09:50.993 12422.888 - 12475.528: 94.3103% ( 9) 00:09:50.993 12475.528 - 12528.167: 94.3576% ( 6) 00:09:50.993 12528.167 - 12580.806: 94.4208% ( 8) 00:09:50.993 12580.806 - 12633.446: 94.4681% ( 6) 00:09:50.993 12633.446 - 12686.085: 94.5234% ( 7) 00:09:50.993 12686.085 - 12738.724: 94.6417% ( 15) 00:09:50.993 12738.724 - 12791.364: 94.7522% ( 14) 00:09:50.993 12791.364 - 12844.003: 94.9100% ( 20) 00:09:50.993 12844.003 - 12896.643: 95.0994% ( 24) 00:09:50.993 12896.643 - 12949.282: 95.2494% ( 19) 00:09:50.993 12949.282 - 13001.921: 95.3756% ( 16) 00:09:50.993 13001.921 - 13054.561: 95.4703% ( 12) 00:09:50.993 13054.561 - 13107.200: 95.5177% ( 6) 00:09:50.993 13107.200 - 13159.839: 95.5887% ( 9) 00:09:50.993 13159.839 - 13212.479: 95.6913% ( 13) 00:09:50.993 13212.479 - 13265.118: 95.7939% ( 13) 00:09:50.993 13265.118 - 13317.757: 95.8965% ( 13) 00:09:50.993 13317.757 - 13370.397: 95.9833% ( 11) 00:09:50.993 13370.397 - 13423.036: 96.0306% ( 6) 00:09:50.993 13423.036 - 13475.676: 96.0780% ( 6) 00:09:50.993 13475.676 - 13580.954: 96.1884% ( 14) 00:09:50.993 13580.954 - 13686.233: 96.2910% ( 13) 00:09:50.993 13686.233 - 13791.512: 96.3542% ( 8) 00:09:50.993 13791.512 - 13896.790: 96.3857% ( 4) 00:09:50.993 
13896.790 - 14002.069: 96.4173% ( 4) 00:09:50.993 14002.069 - 14107.348: 96.4568% ( 5) 00:09:50.993 14107.348 - 14212.627: 96.5120% ( 7) 00:09:50.993 14212.627 - 14317.905: 96.5593% ( 6) 00:09:50.993 14317.905 - 14423.184: 96.6698% ( 14) 00:09:50.993 14423.184 - 14528.463: 96.8829% ( 27) 00:09:50.993 14528.463 - 14633.741: 97.0644% ( 23) 00:09:50.993 14633.741 - 14739.020: 97.2064% ( 18) 00:09:50.993 14739.020 - 14844.299: 97.3011% ( 12) 00:09:50.993 14844.299 - 14949.578: 97.3722% ( 9) 00:09:50.993 14949.578 - 15054.856: 97.4274% ( 7) 00:09:50.993 15054.856 - 15160.135: 97.5063% ( 10) 00:09:50.993 15160.135 - 15265.414: 97.5773% ( 9) 00:09:50.993 15265.414 - 15370.692: 97.6720% ( 12) 00:09:50.993 15370.692 - 15475.971: 97.8456% ( 22) 00:09:50.993 15475.971 - 15581.250: 97.9403% ( 12) 00:09:50.993 15581.250 - 15686.529: 97.9798% ( 5) 00:09:50.993 15686.529 - 15791.807: 97.9877% ( 1) 00:09:50.993 15791.807 - 15897.086: 97.9956% ( 1) 00:09:50.993 15897.086 - 16002.365: 98.0824% ( 11) 00:09:50.993 16002.365 - 16107.643: 98.1534% ( 9) 00:09:50.993 16107.643 - 16212.922: 98.1850% ( 4) 00:09:50.993 16212.922 - 16318.201: 98.2165% ( 4) 00:09:50.993 16318.201 - 16423.480: 98.2639% ( 6) 00:09:50.993 16423.480 - 16528.758: 98.3112% ( 6) 00:09:50.993 16528.758 - 16634.037: 98.3744% ( 8) 00:09:50.993 16634.037 - 16739.316: 98.5322% ( 20) 00:09:50.993 16739.316 - 16844.594: 98.6821% ( 19) 00:09:50.993 16844.594 - 16949.873: 98.8636% ( 23) 00:09:50.993 16949.873 - 17055.152: 98.9504% ( 11) 00:09:50.993 17055.152 - 17160.431: 98.9899% ( 5) 00:09:50.993 25477.449 - 25582.728: 99.0451% ( 7) 00:09:50.993 25582.728 - 25688.006: 99.1004% ( 7) 00:09:50.993 25688.006 - 25793.285: 99.1556% ( 7) 00:09:50.993 25793.285 - 25898.564: 99.2030% ( 6) 00:09:50.993 25898.564 - 26003.843: 99.2582% ( 7) 00:09:50.993 26003.843 - 26109.121: 99.3134% ( 7) 00:09:50.993 26109.121 - 26214.400: 99.3687% ( 7) 00:09:50.993 26214.400 - 26319.679: 99.4239% ( 7) 00:09:50.993 26319.679 - 26424.957: 99.4713% ( 6) 00:09:50.993 26424.957 - 26530.236: 99.4949% ( 3) 00:09:50.993 32004.729 - 32215.287: 99.5581% ( 8) 00:09:50.993 32215.287 - 32425.844: 99.6607% ( 13) 00:09:50.993 32425.844 - 32636.402: 99.7554% ( 12) 00:09:50.993 32636.402 - 32846.959: 99.8658% ( 14) 00:09:50.993 32846.959 - 33057.516: 99.9763% ( 14) 00:09:50.993 33057.516 - 33268.074: 100.0000% ( 3) 00:09:50.993 00:09:50.993 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:50.993 ============================================================================== 00:09:50.993 Range in us Cumulative IO count 00:09:50.993 6422.002 - 6448.321: 0.0079% ( 1) 00:09:50.993 6448.321 - 6474.641: 0.0237% ( 2) 00:09:50.993 6474.641 - 6500.961: 0.0395% ( 2) 00:09:50.993 6500.961 - 6527.280: 0.0552% ( 2) 00:09:50.993 6527.280 - 6553.600: 0.0789% ( 3) 00:09:50.993 6553.600 - 6579.920: 0.1184% ( 5) 00:09:50.993 6579.920 - 6606.239: 0.1499% ( 4) 00:09:50.993 6606.239 - 6632.559: 0.1736% ( 3) 00:09:50.993 6632.559 - 6658.879: 0.1973% ( 3) 00:09:50.993 6658.879 - 6685.198: 0.2289% ( 4) 00:09:50.993 6685.198 - 6711.518: 0.2999% ( 9) 00:09:50.993 6711.518 - 6737.838: 0.3235% ( 3) 00:09:50.993 6737.838 - 6790.477: 0.3709% ( 6) 00:09:50.993 6790.477 - 6843.116: 0.4261% ( 7) 00:09:50.993 6843.116 - 6895.756: 0.4814% ( 7) 00:09:50.993 6895.756 - 6948.395: 0.5051% ( 3) 00:09:50.993 7790.625 - 7843.264: 0.5129% ( 1) 00:09:50.993 8001.182 - 8053.822: 0.5208% ( 1) 00:09:50.993 8053.822 - 8106.461: 0.5997% ( 10) 00:09:50.993 8106.461 - 8159.100: 0.7023% ( 13) 00:09:50.993 8159.100 - 
8211.740: 0.8365% ( 17) 00:09:50.993 8211.740 - 8264.379: 1.0101% ( 22) 00:09:50.993 8264.379 - 8317.018: 1.2468% ( 30) 00:09:50.993 8317.018 - 8369.658: 1.3573% ( 14) 00:09:50.993 8369.658 - 8422.297: 1.4915% ( 17) 00:09:50.993 8422.297 - 8474.937: 1.6414% ( 19) 00:09:50.993 8474.937 - 8527.576: 1.8624% ( 28) 00:09:50.993 8527.576 - 8580.215: 2.2017% ( 43) 00:09:50.993 8580.215 - 8632.855: 2.7068% ( 64) 00:09:50.993 8632.855 - 8685.494: 3.8668% ( 147) 00:09:50.993 8685.494 - 8738.133: 5.1294% ( 160) 00:09:50.993 8738.133 - 8790.773: 6.8419% ( 217) 00:09:50.993 8790.773 - 8843.412: 9.4381% ( 329) 00:09:50.993 8843.412 - 8896.051: 11.4662% ( 257) 00:09:50.993 8896.051 - 8948.691: 13.8573% ( 303) 00:09:50.993 8948.691 - 9001.330: 16.4694% ( 331) 00:09:50.993 9001.330 - 9053.969: 19.1840% ( 344) 00:09:50.993 9053.969 - 9106.609: 21.5436% ( 299) 00:09:50.993 9106.609 - 9159.248: 24.0609% ( 319) 00:09:50.993 9159.248 - 9211.888: 26.5625% ( 317) 00:09:50.993 9211.888 - 9264.527: 29.4744% ( 369) 00:09:50.993 9264.527 - 9317.166: 32.2996% ( 358) 00:09:50.993 9317.166 - 9369.806: 35.4324% ( 397) 00:09:50.993 9369.806 - 9422.445: 38.7863% ( 425) 00:09:50.993 9422.445 - 9475.084: 42.1323% ( 424) 00:09:50.993 9475.084 - 9527.724: 45.2730% ( 398) 00:09:50.993 9527.724 - 9580.363: 48.2955% ( 383) 00:09:50.993 9580.363 - 9633.002: 51.1758% ( 365) 00:09:50.993 9633.002 - 9685.642: 53.9299% ( 349) 00:09:50.993 9685.642 - 9738.281: 56.1395% ( 280) 00:09:50.993 9738.281 - 9790.920: 57.8993% ( 223) 00:09:50.993 9790.920 - 9843.560: 59.8169% ( 243) 00:09:50.993 9843.560 - 9896.199: 61.4504% ( 207) 00:09:50.993 9896.199 - 9948.839: 62.7841% ( 169) 00:09:50.993 9948.839 - 10001.478: 64.2124% ( 181) 00:09:50.993 10001.478 - 10054.117: 65.8144% ( 203) 00:09:50.993 10054.117 - 10106.757: 67.2033% ( 176) 00:09:50.993 10106.757 - 10159.396: 69.1525% ( 247) 00:09:50.993 10159.396 - 10212.035: 70.9280% ( 225) 00:09:50.993 10212.035 - 10264.675: 72.3248% ( 177) 00:09:50.993 10264.675 - 10317.314: 73.9899% ( 211) 00:09:50.993 10317.314 - 10369.953: 75.4498% ( 185) 00:09:50.993 10369.953 - 10422.593: 76.9413% ( 189) 00:09:50.994 10422.593 - 10475.232: 78.2828% ( 170) 00:09:50.994 10475.232 - 10527.871: 79.3166% ( 131) 00:09:50.994 10527.871 - 10580.511: 80.3662% ( 133) 00:09:50.994 10580.511 - 10633.150: 81.4078% ( 132) 00:09:50.994 10633.150 - 10685.790: 82.2443% ( 106) 00:09:50.994 10685.790 - 10738.429: 83.1518% ( 115) 00:09:50.994 10738.429 - 10791.068: 84.2014% ( 133) 00:09:50.994 10791.068 - 10843.708: 85.2115% ( 128) 00:09:50.994 10843.708 - 10896.347: 86.4268% ( 154) 00:09:50.994 10896.347 - 10948.986: 88.1155% ( 214) 00:09:50.994 10948.986 - 11001.626: 89.0388% ( 117) 00:09:50.994 11001.626 - 11054.265: 89.8438% ( 102) 00:09:50.994 11054.265 - 11106.904: 90.5145% ( 85) 00:09:50.994 11106.904 - 11159.544: 90.9643% ( 57) 00:09:50.994 11159.544 - 11212.183: 91.3273% ( 46) 00:09:50.994 11212.183 - 11264.822: 91.5956% ( 34) 00:09:50.994 11264.822 - 11317.462: 91.8718% ( 35) 00:09:50.994 11317.462 - 11370.101: 92.0218% ( 19) 00:09:50.994 11370.101 - 11422.741: 92.1402% ( 15) 00:09:50.994 11422.741 - 11475.380: 92.4953% ( 45) 00:09:50.994 11475.380 - 11528.019: 92.7004% ( 26) 00:09:50.994 11528.019 - 11580.659: 92.8425% ( 18) 00:09:50.994 11580.659 - 11633.298: 92.9609% ( 15) 00:09:50.994 11633.298 - 11685.937: 93.1029% ( 18) 00:09:50.994 11685.937 - 11738.577: 93.2844% ( 23) 00:09:50.994 11738.577 - 11791.216: 93.4422% ( 20) 00:09:50.994 11791.216 - 11843.855: 93.6080% ( 21) 00:09:50.994 11843.855 - 11896.495: 93.7737% 
( 21) 00:09:50.994 11896.495 - 11949.134: 93.8999% ( 16) 00:09:50.994 11949.134 - 12001.773: 94.0104% ( 14) 00:09:50.994 12001.773 - 12054.413: 94.1288% ( 15) 00:09:50.994 12054.413 - 12107.052: 94.1919% ( 8) 00:09:50.994 12107.052 - 12159.692: 94.2472% ( 7) 00:09:50.994 12159.692 - 12212.331: 94.3024% ( 7) 00:09:50.994 12212.331 - 12264.970: 94.3419% ( 5) 00:09:50.994 12264.970 - 12317.610: 94.3813% ( 5) 00:09:50.994 12317.610 - 12370.249: 94.4050% ( 3) 00:09:50.994 12370.249 - 12422.888: 94.4287% ( 3) 00:09:50.994 12422.888 - 12475.528: 94.4366% ( 1) 00:09:50.994 12475.528 - 12528.167: 94.4444% ( 1) 00:09:50.994 12738.724 - 12791.364: 94.4523% ( 1) 00:09:50.994 12791.364 - 12844.003: 94.4602% ( 1) 00:09:50.994 12844.003 - 12896.643: 94.4918% ( 4) 00:09:50.994 12896.643 - 12949.282: 94.5470% ( 7) 00:09:50.994 12949.282 - 13001.921: 94.6023% ( 7) 00:09:50.994 13001.921 - 13054.561: 94.6496% ( 6) 00:09:50.994 13054.561 - 13107.200: 94.7128% ( 8) 00:09:50.994 13107.200 - 13159.839: 94.7680% ( 7) 00:09:50.994 13159.839 - 13212.479: 94.9890% ( 28) 00:09:50.994 13212.479 - 13265.118: 95.0363% ( 6) 00:09:50.994 13265.118 - 13317.757: 95.0994% ( 8) 00:09:50.994 13317.757 - 13370.397: 95.2020% ( 13) 00:09:50.994 13370.397 - 13423.036: 95.3125% ( 14) 00:09:50.994 13423.036 - 13475.676: 95.4388% ( 16) 00:09:50.994 13475.676 - 13580.954: 95.8254% ( 49) 00:09:50.994 13580.954 - 13686.233: 96.0859% ( 33) 00:09:50.994 13686.233 - 13791.512: 96.2753% ( 24) 00:09:50.994 13791.512 - 13896.790: 96.3857% ( 14) 00:09:50.994 13896.790 - 14002.069: 96.5830% ( 25) 00:09:50.994 14002.069 - 14107.348: 96.8040% ( 28) 00:09:50.994 14107.348 - 14212.627: 96.8829% ( 10) 00:09:50.994 14212.627 - 14317.905: 96.9381% ( 7) 00:09:50.994 14317.905 - 14423.184: 97.0013% ( 8) 00:09:50.994 14423.184 - 14528.463: 97.0644% ( 8) 00:09:50.994 14528.463 - 14633.741: 97.1828% ( 15) 00:09:50.994 14633.741 - 14739.020: 97.3879% ( 26) 00:09:50.994 14739.020 - 14844.299: 97.5537% ( 21) 00:09:50.994 14844.299 - 14949.578: 97.6168% ( 8) 00:09:50.994 14949.578 - 15054.856: 97.6720% ( 7) 00:09:50.994 15054.856 - 15160.135: 97.7273% ( 7) 00:09:50.994 15160.135 - 15265.414: 97.7825% ( 7) 00:09:50.994 15265.414 - 15370.692: 97.8456% ( 8) 00:09:50.994 15370.692 - 15475.971: 97.8851% ( 5) 00:09:50.994 15475.971 - 15581.250: 97.9246% ( 5) 00:09:50.994 15581.250 - 15686.529: 97.9719% ( 6) 00:09:50.994 15686.529 - 15791.807: 98.0745% ( 13) 00:09:50.994 15791.807 - 15897.086: 98.1455% ( 9) 00:09:50.994 15897.086 - 16002.365: 98.2244% ( 10) 00:09:50.994 16002.365 - 16107.643: 98.2639% ( 5) 00:09:50.994 16107.643 - 16212.922: 98.2955% ( 4) 00:09:50.994 16212.922 - 16318.201: 98.3270% ( 4) 00:09:50.994 16318.201 - 16423.480: 98.3665% ( 5) 00:09:50.994 16423.480 - 16528.758: 98.4138% ( 6) 00:09:50.994 16528.758 - 16634.037: 98.4612% ( 6) 00:09:50.994 16634.037 - 16739.316: 98.4848% ( 3) 00:09:50.994 16949.873 - 17055.152: 98.4927% ( 1) 00:09:50.994 17160.431 - 17265.709: 98.5638% ( 9) 00:09:50.994 17265.709 - 17370.988: 98.6742% ( 14) 00:09:50.994 17370.988 - 17476.267: 98.8242% ( 19) 00:09:50.994 17476.267 - 17581.545: 98.9189% ( 12) 00:09:50.994 17581.545 - 17686.824: 98.9820% ( 8) 00:09:50.994 17686.824 - 17792.103: 98.9899% ( 1) 00:09:50.994 24951.055 - 25056.334: 99.0451% ( 7) 00:09:50.994 25056.334 - 25161.613: 99.1004% ( 7) 00:09:50.994 25161.613 - 25266.892: 99.1556% ( 7) 00:09:50.994 25266.892 - 25372.170: 99.2109% ( 7) 00:09:50.994 25372.170 - 25477.449: 99.2582% ( 6) 00:09:50.994 25477.449 - 25582.728: 99.3134% ( 7) 00:09:50.994 25582.728 - 
25688.006: 99.3687% ( 7) 00:09:50.994 25688.006 - 25793.285: 99.4239% ( 7) 00:09:50.994 25793.285 - 25898.564: 99.4792% ( 7) 00:09:50.994 25898.564 - 26003.843: 99.4949% ( 2) 00:09:50.994 31583.614 - 31794.172: 99.5107% ( 2) 00:09:50.994 31794.172 - 32004.729: 99.6212% ( 14) 00:09:50.994 32004.729 - 32215.287: 99.7238% ( 13) 00:09:50.994 32215.287 - 32425.844: 99.8185% ( 12) 00:09:50.994 32425.844 - 32636.402: 99.9211% ( 13) 00:09:50.994 32636.402 - 32846.959: 100.0000% ( 10) 00:09:50.994 00:09:50.994 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:50.994 ============================================================================== 00:09:50.994 Range in us Cumulative IO count 00:09:50.994 6000.887 - 6027.206: 0.0079% ( 1) 00:09:50.994 6106.165 - 6132.485: 0.0158% ( 1) 00:09:50.994 6132.485 - 6158.805: 0.0237% ( 1) 00:09:50.994 6158.805 - 6185.124: 0.0473% ( 3) 00:09:50.994 6185.124 - 6211.444: 0.0631% ( 2) 00:09:50.994 6211.444 - 6237.764: 0.0947% ( 4) 00:09:50.994 6237.764 - 6264.084: 0.1184% ( 3) 00:09:50.994 6264.084 - 6290.403: 0.1499% ( 4) 00:09:50.994 6290.403 - 6316.723: 0.1736% ( 3) 00:09:50.994 6316.723 - 6343.043: 0.2210% ( 6) 00:09:50.994 6343.043 - 6369.362: 0.2999% ( 10) 00:09:50.994 6369.362 - 6395.682: 0.3551% ( 7) 00:09:50.994 6395.682 - 6422.002: 0.3867% ( 4) 00:09:50.994 6422.002 - 6448.321: 0.4104% ( 3) 00:09:50.994 6448.321 - 6474.641: 0.4419% ( 4) 00:09:50.994 6474.641 - 6500.961: 0.4656% ( 3) 00:09:50.994 6500.961 - 6527.280: 0.4814% ( 2) 00:09:50.994 6527.280 - 6553.600: 0.4972% ( 2) 00:09:50.994 6553.600 - 6579.920: 0.5051% ( 1) 00:09:50.994 7685.346 - 7737.986: 0.5366% ( 4) 00:09:50.994 7737.986 - 7790.625: 0.5997% ( 8) 00:09:50.994 7790.625 - 7843.264: 0.6629% ( 8) 00:09:50.994 7843.264 - 7895.904: 0.7260% ( 8) 00:09:50.994 7895.904 - 7948.543: 0.8365% ( 14) 00:09:50.994 7948.543 - 8001.182: 0.8917% ( 7) 00:09:50.994 8001.182 - 8053.822: 0.9549% ( 8) 00:09:50.994 8053.822 - 8106.461: 1.0259% ( 9) 00:09:50.994 8106.461 - 8159.100: 1.0732% ( 6) 00:09:50.994 8159.100 - 8211.740: 1.1206% ( 6) 00:09:50.994 8211.740 - 8264.379: 1.2153% ( 12) 00:09:50.994 8264.379 - 8317.018: 1.3494% ( 17) 00:09:50.994 8317.018 - 8369.658: 1.6020% ( 32) 00:09:50.994 8369.658 - 8422.297: 1.7519% ( 19) 00:09:50.994 8422.297 - 8474.937: 2.0360% ( 36) 00:09:50.994 8474.937 - 8527.576: 2.5016% ( 59) 00:09:50.994 8527.576 - 8580.215: 2.9751% ( 60) 00:09:50.994 8580.215 - 8632.855: 3.6064% ( 80) 00:09:50.994 8632.855 - 8685.494: 4.4508% ( 107) 00:09:50.994 8685.494 - 8738.133: 5.3030% ( 108) 00:09:50.994 8738.133 - 8790.773: 6.5104% ( 153) 00:09:50.994 8790.773 - 8843.412: 8.4359% ( 244) 00:09:50.994 8843.412 - 8896.051: 10.5429% ( 267) 00:09:50.994 8896.051 - 8948.691: 13.2339% ( 341) 00:09:50.994 8948.691 - 9001.330: 15.7434% ( 318) 00:09:50.994 9001.330 - 9053.969: 18.4107% ( 338) 00:09:50.994 9053.969 - 9106.609: 20.6045% ( 278) 00:09:50.994 9106.609 - 9159.248: 23.3349% ( 346) 00:09:50.994 9159.248 - 9211.888: 26.0969% ( 350) 00:09:50.994 9211.888 - 9264.527: 28.7169% ( 332) 00:09:50.994 9264.527 - 9317.166: 31.8261% ( 394) 00:09:50.994 9317.166 - 9369.806: 35.4482% ( 459) 00:09:50.994 9369.806 - 9422.445: 38.1155% ( 338) 00:09:50.994 9422.445 - 9475.084: 41.2090% ( 392) 00:09:50.994 9475.084 - 9527.724: 44.2866% ( 390) 00:09:50.994 9527.724 - 9580.363: 47.2380% ( 374) 00:09:50.994 9580.363 - 9633.002: 50.2052% ( 376) 00:09:50.994 9633.002 - 9685.642: 52.6989% ( 316) 00:09:50.994 9685.642 - 9738.281: 54.5928% ( 240) 00:09:50.994 9738.281 - 9790.920: 56.6051% ( 255) 
00:09:50.994 9790.920 - 9843.560: 58.4754% ( 237) 00:09:50.994 9843.560 - 9896.199: 60.0063% ( 194) 00:09:50.994 9896.199 - 9948.839: 61.7740% ( 224) 00:09:50.994 9948.839 - 10001.478: 63.3523% ( 200) 00:09:50.994 10001.478 - 10054.117: 65.0963% ( 221) 00:09:50.994 10054.117 - 10106.757: 66.8876% ( 227) 00:09:50.994 10106.757 - 10159.396: 69.0025% ( 268) 00:09:50.994 10159.396 - 10212.035: 70.6676% ( 211) 00:09:50.994 10212.035 - 10264.675: 72.2932% ( 206) 00:09:50.994 10264.675 - 10317.314: 73.8084% ( 192) 00:09:50.994 10317.314 - 10369.953: 75.5761% ( 224) 00:09:50.994 10369.953 - 10422.593: 76.8860% ( 166) 00:09:50.994 10422.593 - 10475.232: 78.3539% ( 186) 00:09:50.994 10475.232 - 10527.871: 79.6559% ( 165) 00:09:50.994 10527.871 - 10580.511: 81.0764% ( 180) 00:09:50.994 10580.511 - 10633.150: 82.2680% ( 151) 00:09:50.994 10633.150 - 10685.790: 83.4044% ( 144) 00:09:50.994 10685.790 - 10738.429: 84.4539% ( 133) 00:09:50.994 10738.429 - 10791.068: 85.8349% ( 175) 00:09:50.994 10791.068 - 10843.708: 86.8371% ( 127) 00:09:50.994 10843.708 - 10896.347: 87.9577% ( 142) 00:09:50.994 10896.347 - 10948.986: 89.1572% ( 152) 00:09:50.994 10948.986 - 11001.626: 90.2778% ( 142) 00:09:50.994 11001.626 - 11054.265: 91.2879% ( 128) 00:09:50.994 11054.265 - 11106.904: 91.9113% ( 79) 00:09:50.994 11106.904 - 11159.544: 92.4321% ( 66) 00:09:50.994 11159.544 - 11212.183: 92.7951% ( 46) 00:09:50.994 11212.183 - 11264.822: 93.0792% ( 36) 00:09:50.995 11264.822 - 11317.462: 93.3002% ( 28) 00:09:50.995 11317.462 - 11370.101: 93.5290% ( 29) 00:09:50.995 11370.101 - 11422.741: 93.6632% ( 17) 00:09:50.995 11422.741 - 11475.380: 93.7816% ( 15) 00:09:50.995 11475.380 - 11528.019: 93.8684% ( 11) 00:09:50.995 11528.019 - 11580.659: 93.9631% ( 12) 00:09:50.995 11580.659 - 11633.298: 94.0420% ( 10) 00:09:50.995 11633.298 - 11685.937: 94.2156% ( 22) 00:09:50.995 11685.937 - 11738.577: 94.2787% ( 8) 00:09:50.995 11738.577 - 11791.216: 94.3261% ( 6) 00:09:50.995 11791.216 - 11843.855: 94.3734% ( 6) 00:09:50.995 11843.855 - 11896.495: 94.4050% ( 4) 00:09:50.995 11896.495 - 11949.134: 94.4366% ( 4) 00:09:50.995 11949.134 - 12001.773: 94.4444% ( 1) 00:09:50.995 12738.724 - 12791.364: 94.4523% ( 1) 00:09:50.995 12791.364 - 12844.003: 94.4760% ( 3) 00:09:50.995 12844.003 - 12896.643: 94.5155% ( 5) 00:09:50.995 12896.643 - 12949.282: 94.5470% ( 4) 00:09:50.995 12949.282 - 13001.921: 94.5786% ( 4) 00:09:50.995 13001.921 - 13054.561: 94.6181% ( 5) 00:09:50.995 13054.561 - 13107.200: 94.6417% ( 3) 00:09:50.995 13107.200 - 13159.839: 94.7917% ( 19) 00:09:50.995 13159.839 - 13212.479: 94.8469% ( 7) 00:09:50.995 13212.479 - 13265.118: 94.8706% ( 3) 00:09:50.995 13265.118 - 13317.757: 94.8943% ( 3) 00:09:50.995 13317.757 - 13370.397: 94.9258% ( 4) 00:09:50.995 13370.397 - 13423.036: 94.9653% ( 5) 00:09:50.995 13423.036 - 13475.676: 95.0442% ( 10) 00:09:50.995 13475.676 - 13580.954: 95.2415% ( 25) 00:09:50.995 13580.954 - 13686.233: 95.4940% ( 32) 00:09:50.995 13686.233 - 13791.512: 95.6045% ( 14) 00:09:50.995 13791.512 - 13896.790: 95.7702% ( 21) 00:09:50.995 13896.790 - 14002.069: 95.9280% ( 20) 00:09:50.995 14002.069 - 14107.348: 96.0701% ( 18) 00:09:50.995 14107.348 - 14212.627: 96.3305% ( 33) 00:09:50.995 14212.627 - 14317.905: 96.8908% ( 71) 00:09:50.995 14317.905 - 14423.184: 97.1670% ( 35) 00:09:50.995 14423.184 - 14528.463: 97.2775% ( 14) 00:09:50.995 14528.463 - 14633.741: 97.3248% ( 6) 00:09:50.995 14633.741 - 14739.020: 97.3801% ( 7) 00:09:50.995 14739.020 - 14844.299: 97.4590% ( 10) 00:09:50.995 14844.299 - 14949.578: 
97.5142% ( 7) 00:09:50.995 14949.578 - 15054.856: 97.5616% ( 6) 00:09:50.995 15054.856 - 15160.135: 97.6089% ( 6) 00:09:50.995 15160.135 - 15265.414: 97.6720% ( 8) 00:09:50.995 15265.414 - 15370.692: 97.8299% ( 20) 00:09:50.995 15370.692 - 15475.971: 97.9719% ( 18) 00:09:50.995 15475.971 - 15581.250: 98.0903% ( 15) 00:09:50.995 15581.250 - 15686.529: 98.1850% ( 12) 00:09:50.995 15686.529 - 15791.807: 98.2402% ( 7) 00:09:50.995 15791.807 - 15897.086: 98.2876% ( 6) 00:09:50.995 15897.086 - 16002.365: 98.3270% ( 5) 00:09:50.995 16002.365 - 16107.643: 98.3744% ( 6) 00:09:50.995 16107.643 - 16212.922: 98.4217% ( 6) 00:09:50.995 16212.922 - 16318.201: 98.4612% ( 5) 00:09:50.995 16318.201 - 16423.480: 98.4848% ( 3) 00:09:50.995 17476.267 - 17581.545: 98.5717% ( 11) 00:09:50.995 17581.545 - 17686.824: 98.6032% ( 4) 00:09:50.995 17686.824 - 17792.103: 98.6585% ( 7) 00:09:50.995 17792.103 - 17897.382: 98.7058% ( 6) 00:09:50.995 17897.382 - 18002.660: 98.7532% ( 6) 00:09:50.995 18002.660 - 18107.939: 98.8084% ( 7) 00:09:50.995 18107.939 - 18213.218: 98.8557% ( 6) 00:09:50.995 18213.218 - 18318.496: 98.9031% ( 6) 00:09:50.995 18318.496 - 18423.775: 98.9583% ( 7) 00:09:50.995 18423.775 - 18529.054: 98.9899% ( 4) 00:09:50.995 24635.219 - 24740.498: 98.9978% ( 1) 00:09:50.995 24740.498 - 24845.777: 99.0451% ( 6) 00:09:50.995 24845.777 - 24951.055: 99.1004% ( 7) 00:09:50.995 24951.055 - 25056.334: 99.1556% ( 7) 00:09:50.995 25056.334 - 25161.613: 99.2030% ( 6) 00:09:50.995 25161.613 - 25266.892: 99.2582% ( 7) 00:09:50.995 25266.892 - 25372.170: 99.3056% ( 6) 00:09:50.995 25372.170 - 25477.449: 99.3608% ( 7) 00:09:50.995 25477.449 - 25582.728: 99.4160% ( 7) 00:09:50.995 25582.728 - 25688.006: 99.4713% ( 7) 00:09:50.995 25688.006 - 25793.285: 99.4949% ( 3) 00:09:50.995 31373.057 - 31583.614: 99.5423% ( 6) 00:09:50.995 31583.614 - 31794.172: 99.6449% ( 13) 00:09:50.995 31794.172 - 32004.729: 99.7475% ( 13) 00:09:50.995 32004.729 - 32215.287: 99.8580% ( 14) 00:09:50.995 32215.287 - 32425.844: 99.9527% ( 12) 00:09:50.995 32425.844 - 32636.402: 100.0000% ( 6) 00:09:50.995 00:09:50.995 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:50.995 ============================================================================== 00:09:50.995 Range in us Cumulative IO count 00:09:50.995 5842.969 - 5869.288: 0.0079% ( 1) 00:09:50.995 6027.206 - 6053.526: 0.0157% ( 1) 00:09:50.995 6053.526 - 6079.846: 0.0314% ( 2) 00:09:50.995 6079.846 - 6106.165: 0.0471% ( 2) 00:09:50.995 6106.165 - 6132.485: 0.0707% ( 3) 00:09:50.995 6132.485 - 6158.805: 0.1413% ( 9) 00:09:50.995 6158.805 - 6185.124: 0.2905% ( 19) 00:09:50.995 6185.124 - 6211.444: 0.3926% ( 13) 00:09:50.995 6211.444 - 6237.764: 0.4083% ( 2) 00:09:50.995 6237.764 - 6264.084: 0.4240% ( 2) 00:09:50.995 6264.084 - 6290.403: 0.4318% ( 1) 00:09:50.995 6290.403 - 6316.723: 0.4476% ( 2) 00:09:50.995 6316.723 - 6343.043: 0.4633% ( 2) 00:09:50.995 6343.043 - 6369.362: 0.4790% ( 2) 00:09:50.995 6369.362 - 6395.682: 0.4868% ( 1) 00:09:50.995 6395.682 - 6422.002: 0.5025% ( 2) 00:09:50.995 7580.067 - 7632.707: 0.5104% ( 1) 00:09:50.995 7685.346 - 7737.986: 0.5496% ( 5) 00:09:50.995 7737.986 - 7790.625: 0.5967% ( 6) 00:09:50.995 7790.625 - 7843.264: 0.6595% ( 8) 00:09:50.995 7843.264 - 7895.904: 0.7538% ( 12) 00:09:50.995 7895.904 - 7948.543: 0.8794% ( 16) 00:09:50.995 7948.543 - 8001.182: 0.9501% ( 9) 00:09:50.995 8001.182 - 8053.822: 1.0050% ( 7) 00:09:50.995 8106.461 - 8159.100: 1.0521% ( 6) 00:09:50.995 8159.100 - 8211.740: 1.1228% ( 9) 00:09:50.995 8211.740 - 
8264.379: 1.2406% ( 15) 00:09:50.995 8264.379 - 8317.018: 1.4290% ( 24) 00:09:50.995 8317.018 - 8369.658: 1.5546% ( 16) 00:09:50.995 8369.658 - 8422.297: 1.7431% ( 24) 00:09:50.995 8422.297 - 8474.937: 2.0807% ( 43) 00:09:50.995 8474.937 - 8527.576: 2.4655% ( 49) 00:09:50.995 8527.576 - 8580.215: 2.9052% ( 56) 00:09:50.995 8580.215 - 8632.855: 3.5176% ( 78) 00:09:50.995 8632.855 - 8685.494: 4.1771% ( 84) 00:09:50.995 8685.494 - 8738.133: 5.0330% ( 109) 00:09:50.995 8738.133 - 8790.773: 6.4463% ( 180) 00:09:50.995 8790.773 - 8843.412: 7.9381% ( 190) 00:09:50.995 8843.412 - 8896.051: 9.7440% ( 230) 00:09:50.995 8896.051 - 8948.691: 11.9661% ( 283) 00:09:50.995 8948.691 - 9001.330: 14.3766% ( 307) 00:09:50.995 9001.330 - 9053.969: 16.9912% ( 333) 00:09:50.995 9053.969 - 9106.609: 19.9749% ( 380) 00:09:50.995 9106.609 - 9159.248: 23.2805% ( 421) 00:09:50.995 9159.248 - 9211.888: 26.3348% ( 389) 00:09:50.995 9211.888 - 9264.527: 29.2164% ( 367) 00:09:50.995 9264.527 - 9317.166: 32.5455% ( 424) 00:09:50.995 9317.166 - 9369.806: 35.7412% ( 407) 00:09:50.995 9369.806 - 9422.445: 38.9604% ( 410) 00:09:50.995 9422.445 - 9475.084: 42.1954% ( 412) 00:09:50.995 9475.084 - 9527.724: 44.9435% ( 350) 00:09:50.995 9527.724 - 9580.363: 48.2726% ( 424) 00:09:50.995 9580.363 - 9633.002: 50.7224% ( 312) 00:09:50.995 9633.002 - 9685.642: 52.8816% ( 275) 00:09:50.995 9685.642 - 9738.281: 54.7189% ( 234) 00:09:50.995 9738.281 - 9790.920: 56.3756% ( 211) 00:09:50.995 9790.920 - 9843.560: 58.3935% ( 257) 00:09:50.995 9843.560 - 9896.199: 60.1445% ( 223) 00:09:50.995 9896.199 - 9948.839: 61.7855% ( 209) 00:09:50.995 9948.839 - 10001.478: 63.9133% ( 271) 00:09:50.995 10001.478 - 10054.117: 65.6878% ( 226) 00:09:50.995 10054.117 - 10106.757: 67.4702% ( 227) 00:09:50.995 10106.757 - 10159.396: 69.8021% ( 297) 00:09:50.995 10159.396 - 10212.035: 71.6787% ( 239) 00:09:50.995 10212.035 - 10264.675: 73.2098% ( 195) 00:09:50.995 10264.675 - 10317.314: 74.6310% ( 181) 00:09:50.995 10317.314 - 10369.953: 75.7224% ( 139) 00:09:50.995 10369.953 - 10422.593: 76.8059% ( 138) 00:09:50.995 10422.593 - 10475.232: 77.7560% ( 121) 00:09:50.995 10475.232 - 10527.871: 78.7845% ( 131) 00:09:50.995 10527.871 - 10580.511: 79.9780% ( 152) 00:09:50.995 10580.511 - 10633.150: 81.3050% ( 169) 00:09:50.995 10633.150 - 10685.790: 82.7183% ( 180) 00:09:50.995 10685.790 - 10738.429: 84.0374% ( 168) 00:09:50.996 10738.429 - 10791.068: 85.3015% ( 161) 00:09:50.996 10791.068 - 10843.708: 86.6206% ( 168) 00:09:50.996 10843.708 - 10896.347: 87.9318% ( 167) 00:09:50.996 10896.347 - 10948.986: 89.1803% ( 159) 00:09:50.996 10948.986 - 11001.626: 90.3109% ( 144) 00:09:50.996 11001.626 - 11054.265: 91.0804% ( 98) 00:09:50.996 11054.265 - 11106.904: 91.5908% ( 65) 00:09:50.996 11106.904 - 11159.544: 91.9284% ( 43) 00:09:50.996 11159.544 - 11212.183: 92.1404% ( 27) 00:09:50.996 11212.183 - 11264.822: 92.3131% ( 22) 00:09:50.996 11264.822 - 11317.462: 92.5173% ( 26) 00:09:50.996 11317.462 - 11370.101: 92.6979% ( 23) 00:09:50.996 11370.101 - 11422.741: 92.8549% ( 20) 00:09:50.996 11422.741 - 11475.380: 93.1297% ( 35) 00:09:50.996 11475.380 - 11528.019: 93.3024% ( 22) 00:09:50.996 11528.019 - 11580.659: 93.4673% ( 21) 00:09:50.996 11580.659 - 11633.298: 93.7343% ( 34) 00:09:50.996 11633.298 - 11685.937: 93.8207% ( 11) 00:09:50.996 11685.937 - 11738.577: 93.8756% ( 7) 00:09:50.996 11738.577 - 11791.216: 93.9070% ( 4) 00:09:50.996 11791.216 - 11843.855: 93.9384% ( 4) 00:09:50.996 11843.855 - 11896.495: 93.9541% ( 2) 00:09:50.996 11896.495 - 11949.134: 93.9698% ( 
2) 00:09:50.996 12212.331 - 12264.970: 93.9777% ( 1) 00:09:50.996 12370.249 - 12422.888: 93.9856% ( 1) 00:09:50.996 12475.528 - 12528.167: 93.9934% ( 1) 00:09:50.996 12528.167 - 12580.806: 94.0327% ( 5) 00:09:50.996 12580.806 - 12633.446: 94.0719% ( 5) 00:09:50.996 12633.446 - 12686.085: 94.1583% ( 11) 00:09:50.996 12686.085 - 12738.724: 94.2368% ( 10) 00:09:50.996 12738.724 - 12791.364: 94.3546% ( 15) 00:09:50.996 12791.364 - 12844.003: 94.5195% ( 21) 00:09:50.996 12844.003 - 12896.643: 94.6215% ( 13) 00:09:50.996 12896.643 - 12949.282: 94.7236% ( 13) 00:09:50.996 12949.282 - 13001.921: 94.8021% ( 10) 00:09:50.996 13001.921 - 13054.561: 94.8492% ( 6) 00:09:50.996 13054.561 - 13107.200: 94.9278% ( 10) 00:09:50.996 13107.200 - 13159.839: 95.0769% ( 19) 00:09:50.996 13159.839 - 13212.479: 95.1712% ( 12) 00:09:50.996 13212.479 - 13265.118: 95.2654% ( 12) 00:09:50.996 13265.118 - 13317.757: 95.3518% ( 11) 00:09:50.996 13317.757 - 13370.397: 95.3910% ( 5) 00:09:50.996 13370.397 - 13423.036: 95.4067% ( 2) 00:09:50.996 13423.036 - 13475.676: 95.4146% ( 1) 00:09:50.996 13475.676 - 13580.954: 95.4460% ( 4) 00:09:50.996 13580.954 - 13686.233: 95.4774% ( 4) 00:09:50.996 13686.233 - 13791.512: 95.5716% ( 12) 00:09:50.996 13791.512 - 13896.790: 95.8072% ( 30) 00:09:50.996 13896.790 - 14002.069: 95.9956% ( 24) 00:09:50.996 14002.069 - 14107.348: 96.0741% ( 10) 00:09:50.996 14107.348 - 14212.627: 96.1369% ( 8) 00:09:50.996 14212.627 - 14317.905: 96.2626% ( 16) 00:09:50.996 14317.905 - 14423.184: 96.4196% ( 20) 00:09:50.996 14423.184 - 14528.463: 96.5766% ( 20) 00:09:50.996 14528.463 - 14633.741: 96.7494% ( 22) 00:09:50.996 14633.741 - 14739.020: 97.0006% ( 32) 00:09:50.996 14739.020 - 14844.299: 97.2597% ( 33) 00:09:50.996 14844.299 - 14949.578: 97.6209% ( 46) 00:09:50.996 14949.578 - 15054.856: 97.9350% ( 40) 00:09:50.996 15054.856 - 15160.135: 98.0920% ( 20) 00:09:50.996 15160.135 - 15265.414: 98.1784% ( 11) 00:09:50.996 15265.414 - 15370.692: 98.2412% ( 8) 00:09:50.996 15370.692 - 15475.971: 98.3040% ( 8) 00:09:50.996 15475.971 - 15581.250: 98.3747% ( 9) 00:09:50.996 15581.250 - 15686.529: 98.4218% ( 6) 00:09:50.996 15686.529 - 15791.807: 98.4689% ( 6) 00:09:50.996 15791.807 - 15897.086: 98.4925% ( 3) 00:09:50.996 17897.382 - 18002.660: 98.5003% ( 1) 00:09:50.996 18002.660 - 18107.939: 98.5082% ( 1) 00:09:50.996 18107.939 - 18213.218: 98.6102% ( 13) 00:09:50.996 18213.218 - 18318.496: 98.7123% ( 13) 00:09:50.996 18318.496 - 18423.775: 98.8301% ( 15) 00:09:50.996 18423.775 - 18529.054: 98.9243% ( 12) 00:09:50.996 18529.054 - 18634.333: 99.0107% ( 11) 00:09:50.996 18634.333 - 18739.611: 99.0892% ( 10) 00:09:50.996 18739.611 - 18844.890: 99.1834% ( 12) 00:09:50.996 18844.890 - 18950.169: 99.2855% ( 13) 00:09:50.996 18950.169 - 19055.447: 99.3405% ( 7) 00:09:50.996 19055.447 - 19160.726: 99.3954% ( 7) 00:09:50.996 19160.726 - 19266.005: 99.4504% ( 7) 00:09:50.996 19266.005 - 19371.284: 99.4975% ( 6) 00:09:50.996 24951.055 - 25056.334: 99.5210% ( 3) 00:09:50.996 25056.334 - 25161.613: 99.5760% ( 7) 00:09:50.996 25161.613 - 25266.892: 99.6231% ( 6) 00:09:50.996 25266.892 - 25372.170: 99.6702% ( 6) 00:09:50.996 25372.170 - 25477.449: 99.7252% ( 7) 00:09:50.996 25477.449 - 25582.728: 99.7802% ( 7) 00:09:50.996 25582.728 - 25688.006: 99.8351% ( 7) 00:09:50.996 25688.006 - 25793.285: 99.8901% ( 7) 00:09:50.996 25793.285 - 25898.564: 99.9450% ( 7) 00:09:50.996 25898.564 - 26003.843: 99.9921% ( 6) 00:09:50.996 26003.843 - 26109.121: 100.0000% ( 1) 00:09:50.996 00:09:50.996 ************************************ 
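Each histogram row above gives a latency bucket's upper bound in microseconds, the cumulative percentage of I/Os completed at or below that latency, and the count of I/Os that landed in that bucket. A minimal sketch of how such a cumulative table can be derived from raw per-bucket counts follows; the bucket_us/bucket_count arrays are hypothetical inputs, not SPDK internals.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Hedged sketch: bucket_us[i] is a bucket's upper bound in microseconds,
 * bucket_count[i] the number of I/Os that completed within that bucket. */
static void print_cumulative(const double *bucket_us,
                             const uint64_t *bucket_count, size_t n)
{
    uint64_t total = 0, running = 0;
    size_t i;

    for (i = 0; i < n; i++) {
        total += bucket_count[i];
    }
    printf("%17s %12s %10s\n", "Range in us", "Cumulative", "IO count");
    for (i = 0; i < n; i++) {
        if (bucket_count[i] == 0) {
            continue;            /* empty buckets are not printed */
        }
        running += bucket_count[i];
        /* the lower bound of a bucket is the previous bucket's upper bound */
        printf("%9.3f - %9.3f: %7.4f%% ( %" PRIu64 ")\n",
               i ? bucket_us[i - 1] : 0.0, bucket_us[i],
               100.0 * (double)running / (double)total,
               bucket_count[i]);
    }
}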
00:09:50.996 END TEST nvme_perf 00:09:50.996 ************************************ 00:09:50.996 15:45:25 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:50.996 00:09:50.996 real 0m2.530s 00:09:50.996 user 0m2.187s 00:09:50.996 sys 0m0.244s 00:09:50.996 15:45:25 nvme.nvme_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:50.996 15:45:25 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:09:50.996 15:45:25 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:50.996 15:45:25 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:50.996 15:45:25 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:50.996 15:45:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:50.996 ************************************ 00:09:50.996 START TEST nvme_hello_world 00:09:50.996 ************************************ 00:09:50.996 15:45:25 nvme.nvme_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:51.254 Initializing NVMe Controllers 00:09:51.254 Attached to 0000:00:10.0 00:09:51.254 Namespace ID: 1 size: 6GB 00:09:51.254 Attached to 0000:00:11.0 00:09:51.254 Namespace ID: 1 size: 5GB 00:09:51.254 Attached to 0000:00:13.0 00:09:51.254 Namespace ID: 1 size: 1GB 00:09:51.254 Attached to 0000:00:12.0 00:09:51.254 Namespace ID: 1 size: 4GB 00:09:51.254 Namespace ID: 2 size: 4GB 00:09:51.254 Namespace ID: 3 size: 4GB 00:09:51.254 Initialization complete. 00:09:51.254 INFO: using host memory buffer for IO 00:09:51.254 Hello world! 00:09:51.254 INFO: using host memory buffer for IO 00:09:51.254 Hello world! 00:09:51.254 INFO: using host memory buffer for IO 00:09:51.254 Hello world! 00:09:51.254 INFO: using host memory buffer for IO 00:09:51.254 Hello world! 00:09:51.254 INFO: using host memory buffer for IO 00:09:51.254 Hello world! 00:09:51.254 INFO: using host memory buffer for IO 00:09:51.254 Hello world! 
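The hello_world run above attaches to every controller and, for each namespace, writes a buffer containing "Hello world!" through a host memory buffer, reads it back, and prints it. Below is a condensed, hedged sketch of that write-then-read flow using public SPDK APIs; probe/attach boilerplate and error handling are omitted, and this is not the example's verbatim source (the 4 KiB buffer size is an assumption).

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    *(bool *)arg = true;             /* flag polled by the caller */
}

static void hello(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
{
    struct spdk_nvme_qpair *qpair =
        spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    /* DMA-able host memory buffer, assumed 4 KiB / one block here */
    char *buf = spdk_zmalloc(0x1000, 0x1000, NULL,
                             SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
    bool done = false;

    snprintf(buf, 0x1000, "%s", "Hello world!");
    spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */, 1 /* blocks */,
                           io_complete, &done, 0);
    while (!done) {
        spdk_nvme_qpair_process_completions(qpair, 0);
    }
    done = false;
    memset(buf, 0, 0x1000);
    spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_complete, &done, 0);
    while (!done) {
        spdk_nvme_qpair_process_completions(qpair, 0);
    }
    printf("%s\n", buf);             /* prints "Hello world!" */
    spdk_free(buf);
    spdk_nvme_ctrlr_free_io_qpair(qpair);
}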
00:09:51.254 ************************************ 00:09:51.254 END TEST nvme_hello_world 00:09:51.254 ************************************ 00:09:51.254 00:09:51.254 real 0m0.253s 00:09:51.254 user 0m0.088s 00:09:51.254 sys 0m0.116s 00:09:51.254 15:45:25 nvme.nvme_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:51.254 15:45:25 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:51.254 15:45:25 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:51.254 15:45:25 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:51.254 15:45:25 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:51.254 15:45:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.254 ************************************ 00:09:51.254 START TEST nvme_sgl 00:09:51.254 ************************************ 00:09:51.254 15:45:25 nvme.nvme_sgl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:51.513 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:09:51.513 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:09:51.513 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:09:51.513 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:09:51.513 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:09:51.513 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:09:51.513 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:09:51.513 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:09:51.513 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:09:51.513 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:09:51.513 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:09:51.513 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:09:51.513 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:09:51.513 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:09:51.513 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:09:51.513 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:09:51.513 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:09:51.513 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:09:51.513 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:09:51.513 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:09:51.513 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:09:51.513 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:09:51.513 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:09:51.513 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:09:51.513 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:09:51.513 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:09:51.513 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:09:51.513 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:09:51.513 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:09:51.513 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:09:51.513 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:09:51.513 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:09:51.513 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:09:51.513 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:09:51.513 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:09:51.513 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:09:51.513 NVMe Readv/Writev Request test 00:09:51.513 Attached to 0000:00:10.0 00:09:51.513 Attached to 0000:00:11.0 00:09:51.513 Attached to 0000:00:13.0 00:09:51.513 Attached to 0000:00:12.0 00:09:51.513 0000:00:10.0: build_io_request_2 test passed 00:09:51.513 0000:00:10.0: build_io_request_4 test passed 00:09:51.513 0000:00:10.0: build_io_request_5 test passed 00:09:51.513 0000:00:10.0: build_io_request_6 test passed 00:09:51.513 0000:00:10.0: build_io_request_7 test passed 00:09:51.513 0000:00:10.0: build_io_request_10 test passed 00:09:51.513 0000:00:11.0: build_io_request_2 test passed 00:09:51.513 0000:00:11.0: build_io_request_4 test passed 00:09:51.513 0000:00:11.0: build_io_request_5 test passed 00:09:51.513 0000:00:11.0: build_io_request_6 test passed 00:09:51.513 0000:00:11.0: build_io_request_7 test passed 00:09:51.513 0000:00:11.0: build_io_request_10 test passed 00:09:51.513 Cleaning up... 00:09:51.513 ************************************ 00:09:51.513 END TEST nvme_sgl 00:09:51.513 ************************************ 00:09:51.513 00:09:51.513 real 0m0.299s 00:09:51.513 user 0m0.136s 00:09:51.513 sys 0m0.113s 00:09:51.513 15:45:26 nvme.nvme_sgl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:51.513 15:45:26 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:09:51.513 15:45:26 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:51.513 15:45:26 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:51.513 15:45:26 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:51.513 15:45:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.513 ************************************ 00:09:51.513 START TEST nvme_e2edp 00:09:51.513 ************************************ 00:09:51.513 15:45:26 nvme.nvme_e2edp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:51.770 NVMe Write/Read with End-to-End data protection test 00:09:51.770 Attached to 0000:00:10.0 00:09:51.770 Attached to 0000:00:11.0 00:09:51.770 Attached to 0000:00:13.0 00:09:51.770 Attached to 0000:00:12.0 00:09:51.770 Cleaning up... 
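The build_io_request_* cases in the nvme_sgl run above exercise SPDK's vectored I/O path, where the driver walks an application scatter-gather list through two callbacks; the "Invalid IO length parameter" lines are deliberate negative cases whose total SGL length disagrees with the requested LBA count. A hedged sketch of that callback shape follows; the io_ctx structure and all names here are hypothetical, not the test's source.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include <sys/uio.h>

struct io_ctx {
    struct iovec iov[4];             /* application-owned SGL */
    int          iovpos;
    bool         done;
};

static void sgl_io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    ((struct io_ctx *)arg)->done = true;
}

static void reset_sgl(void *arg, uint32_t offset)
{
    struct io_ctx *ctx = arg;
    ctx->iovpos = 0;                 /* offset handling omitted for brevity */
}

static int next_sge(void *arg, void **address, uint32_t *length)
{
    struct io_ctx *ctx = arg;
    struct iovec *iov = &ctx->iov[ctx->iovpos++];

    *address = iov->iov_base;
    *length  = iov->iov_len;
    return 0;
}

static int submit_vectored_write(struct spdk_nvme_ns *ns,
                                 struct spdk_nvme_qpair *qpair,
                                 struct io_ctx *ctx,
                                 uint64_t lba, uint32_t lba_count)
{
    /* If the SGL's total length does not match lba_count blocks, the
     * request is rejected up front, which is what the negative cases
     * above report. */
    return spdk_nvme_ns_cmd_writev(ns, qpair, lba, lba_count,
                                   sgl_io_complete, ctx, 0,
                                   reset_sgl, next_sge);
}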
00:09:51.770 ************************************ 00:09:51.770 END TEST nvme_e2edp 00:09:51.770 ************************************ 00:09:51.770 00:09:51.770 real 0m0.244s 00:09:51.770 user 0m0.082s 00:09:51.770 sys 0m0.115s 00:09:51.770 15:45:26 nvme.nvme_e2edp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:51.770 15:45:26 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:52.029 15:45:26 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:52.029 15:45:26 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:52.029 15:45:26 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:52.029 15:45:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:52.029 ************************************ 00:09:52.029 START TEST nvme_reserve 00:09:52.029 ************************************ 00:09:52.029 15:45:26 nvme.nvme_reserve -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:52.029 ===================================================== 00:09:52.029 NVMe Controller at PCI bus 0, device 16, function 0 00:09:52.029 ===================================================== 00:09:52.029 Reservations: Not Supported 00:09:52.029 ===================================================== 00:09:52.029 NVMe Controller at PCI bus 0, device 17, function 0 00:09:52.029 ===================================================== 00:09:52.029 Reservations: Not Supported 00:09:52.029 ===================================================== 00:09:52.029 NVMe Controller at PCI bus 0, device 19, function 0 00:09:52.029 ===================================================== 00:09:52.029 Reservations: Not Supported 00:09:52.029 ===================================================== 00:09:52.029 NVMe Controller at PCI bus 0, device 18, function 0 00:09:52.029 ===================================================== 00:09:52.029 Reservations: Not Supported 00:09:52.029 Reservation test passed 00:09:52.288 ************************************ 00:09:52.288 END TEST nvme_reserve 00:09:52.288 ************************************ 00:09:52.288 00:09:52.288 real 0m0.242s 00:09:52.288 user 0m0.087s 00:09:52.288 sys 0m0.108s 00:09:52.288 15:45:26 nvme.nvme_reserve -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:52.288 15:45:26 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:52.288 15:45:26 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:52.288 15:45:26 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:52.288 15:45:26 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:52.288 15:45:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:52.288 ************************************ 00:09:52.288 START TEST nvme_err_injection 00:09:52.288 ************************************ 00:09:52.288 15:45:26 nvme.nvme_err_injection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:52.546 NVMe Error Injection test 00:09:52.546 Attached to 0000:00:10.0 00:09:52.546 Attached to 0000:00:11.0 00:09:52.546 Attached to 0000:00:13.0 00:09:52.546 Attached to 0000:00:12.0 00:09:52.546 0000:00:10.0: get features failed as expected 00:09:52.546 0000:00:11.0: get features failed as expected 00:09:52.546 0000:00:13.0: get features failed as expected 00:09:52.546 0000:00:12.0: get features failed as expected 00:09:52.546 
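The failed-then-successful pattern in the error-injection output (the successful reads follow below) comes from arming a software fault for one opcode, observing the expected failure, then clearing it. A hedged sketch using SPDK's command error injection API follows; passing NULL for the qpair is believed to target the admin queue, but verify that against your SPDK version.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void arm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
    /* Inject a failure for the next Get Features admin command. */
    spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
            SPDK_NVME_OPC_GET_FEATURES,
            false,                    /* do_not_submit */
            0,                        /* timeout_in_us */
            1,                        /* fail exactly one command */
            SPDK_NVME_SCT_GENERIC,
            SPDK_NVME_SC_INVALID_FIELD);
    /* ... issue Get Features: it completes with the injected error,
     * i.e. "get features failed as expected" ... */
    spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
            SPDK_NVME_OPC_GET_FEATURES);
    /* ... issue Get Features again: it now completes normally,
     * i.e. "get features successfully as expected" ... */
}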
0000:00:10.0: get features successfully as expected 00:09:52.546 0000:00:11.0: get features successfully as expected 00:09:52.546 0000:00:13.0: get features successfully as expected 00:09:52.546 0000:00:12.0: get features successfully as expected 00:09:52.546 0000:00:10.0: read failed as expected 00:09:52.546 0000:00:11.0: read failed as expected 00:09:52.546 0000:00:13.0: read failed as expected 00:09:52.546 0000:00:12.0: read failed as expected 00:09:52.546 0000:00:12.0: read successfully as expected 00:09:52.546 0000:00:10.0: read successfully as expected 00:09:52.546 0000:00:11.0: read successfully as expected 00:09:52.546 0000:00:13.0: read successfully as expected 00:09:52.546 Cleaning up... 00:09:52.546 ************************************ 00:09:52.546 END TEST nvme_err_injection 00:09:52.546 ************************************ 00:09:52.546 00:09:52.546 real 0m0.253s 00:09:52.546 user 0m0.074s 00:09:52.546 sys 0m0.130s 00:09:52.546 15:45:27 nvme.nvme_err_injection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:52.546 15:45:27 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:52.546 15:45:27 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:52.546 15:45:27 nvme -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:52.546 15:45:27 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:52.546 15:45:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:52.546 ************************************ 00:09:52.546 START TEST nvme_overhead 00:09:52.546 ************************************ 00:09:52.546 15:45:27 nvme.nvme_overhead -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:53.923 Initializing NVMe Controllers 00:09:53.923 Attached to 0000:00:10.0 00:09:53.923 Attached to 0000:00:11.0 00:09:53.923 Attached to 0000:00:13.0 00:09:53.923 Attached to 0000:00:12.0 00:09:53.923 Initialization complete. Launching workers. 
00:09:53.923 submit (in ns) avg, min, max = 14012.6, 11869.9, 101927.7 00:09:53.923 complete (in ns) avg, min, max = 8482.5, 7763.1, 100626.5 00:09:53.923 00:09:53.923 Submit histogram 00:09:53.923 ================ 00:09:53.923 Range in us Cumulative Count 00:09:53.923 11.823 - 11.875: 0.0176% ( 1) 00:09:53.923 11.926 - 11.978: 0.0351% ( 1) 00:09:53.923 11.978 - 12.029: 0.1229% ( 5) 00:09:53.923 12.029 - 12.080: 0.1580% ( 2) 00:09:53.923 12.080 - 12.132: 0.1756% ( 1) 00:09:53.923 12.132 - 12.183: 0.2283% ( 3) 00:09:53.923 12.235 - 12.286: 0.2634% ( 2) 00:09:53.923 12.286 - 12.337: 0.3336% ( 4) 00:09:53.923 12.337 - 12.389: 0.4039% ( 4) 00:09:53.923 12.389 - 12.440: 0.4917% ( 5) 00:09:53.923 12.440 - 12.492: 0.7550% ( 15) 00:09:53.923 12.492 - 12.543: 1.0887% ( 19) 00:09:53.923 12.543 - 12.594: 1.5277% ( 25) 00:09:53.923 12.594 - 12.646: 2.5812% ( 60) 00:09:53.923 12.646 - 12.697: 3.6874% ( 63) 00:09:53.923 12.697 - 12.749: 5.4258% ( 99) 00:09:53.923 12.749 - 12.800: 7.7612% ( 133) 00:09:53.923 12.800 - 12.851: 10.2195% ( 140) 00:09:53.923 12.851 - 12.903: 12.3968% ( 124) 00:09:53.923 12.903 - 12.954: 14.8551% ( 140) 00:09:53.923 12.954 - 13.006: 16.9974% ( 122) 00:09:53.923 13.006 - 13.057: 18.9113% ( 109) 00:09:53.923 13.057 - 13.108: 20.5795% ( 95) 00:09:53.923 13.108 - 13.160: 22.1949% ( 92) 00:09:53.923 13.160 - 13.263: 25.5312% ( 190) 00:09:53.923 13.263 - 13.365: 27.9719% ( 139) 00:09:53.923 13.365 - 13.468: 29.8507% ( 107) 00:09:53.923 13.468 - 13.571: 33.4504% ( 205) 00:09:53.923 13.571 - 13.674: 39.7015% ( 356) 00:09:53.923 13.674 - 13.777: 49.1484% ( 538) 00:09:53.923 13.777 - 13.880: 58.5601% ( 536) 00:09:53.923 13.880 - 13.982: 67.8841% ( 531) 00:09:53.923 13.982 - 14.085: 76.1545% ( 471) 00:09:53.923 14.085 - 14.188: 82.6339% ( 369) 00:09:53.923 14.188 - 14.291: 87.6734% ( 287) 00:09:53.923 14.291 - 14.394: 90.3600% ( 153) 00:09:53.923 14.394 - 14.496: 92.3793% ( 115) 00:09:53.923 14.496 - 14.599: 93.5733% ( 68) 00:09:53.923 14.599 - 14.702: 93.9772% ( 23) 00:09:53.923 14.702 - 14.805: 94.1879% ( 12) 00:09:53.923 14.805 - 14.908: 94.3986% ( 12) 00:09:53.923 14.908 - 15.010: 94.4513% ( 3) 00:09:53.923 15.010 - 15.113: 94.5215% ( 4) 00:09:53.923 15.113 - 15.216: 94.5917% ( 4) 00:09:53.923 15.216 - 15.319: 94.6093% ( 1) 00:09:53.923 15.319 - 15.422: 94.6269% ( 1) 00:09:53.923 15.422 - 15.524: 94.6620% ( 2) 00:09:53.923 15.524 - 15.627: 94.6971% ( 2) 00:09:53.923 16.450 - 16.553: 94.7322% ( 2) 00:09:53.923 16.964 - 17.067: 94.7673% ( 2) 00:09:53.923 17.067 - 17.169: 94.7849% ( 1) 00:09:53.923 17.272 - 17.375: 94.8200% ( 2) 00:09:53.923 17.375 - 17.478: 94.8551% ( 2) 00:09:53.923 17.684 - 17.786: 94.9429% ( 5) 00:09:53.923 17.786 - 17.889: 95.0483% ( 6) 00:09:53.923 17.889 - 17.992: 95.1185% ( 4) 00:09:53.923 17.992 - 18.095: 95.1888% ( 4) 00:09:53.923 18.095 - 18.198: 95.2414% ( 3) 00:09:53.923 18.198 - 18.300: 95.4522% ( 12) 00:09:53.923 18.300 - 18.403: 95.6102% ( 9) 00:09:53.923 18.403 - 18.506: 95.6980% ( 5) 00:09:53.923 18.506 - 18.609: 95.8385% ( 8) 00:09:53.923 18.609 - 18.712: 95.9965% ( 9) 00:09:53.923 18.712 - 18.814: 96.1018% ( 6) 00:09:53.923 18.814 - 18.917: 96.4530% ( 20) 00:09:53.923 18.917 - 19.020: 96.7340% ( 16) 00:09:53.923 19.020 - 19.123: 97.0149% ( 16) 00:09:53.923 19.123 - 19.226: 97.2608% ( 14) 00:09:53.923 19.226 - 19.329: 97.3661% ( 6) 00:09:53.923 19.329 - 19.431: 97.5417% ( 10) 00:09:53.923 19.431 - 19.534: 97.6646% ( 7) 00:09:53.923 19.534 - 19.637: 97.8051% ( 8) 00:09:53.923 19.637 - 19.740: 97.9982% ( 11) 00:09:53.923 19.740 - 19.843: 98.1387% ( 8) 
00:09:53.923 19.843 - 19.945: 98.2616% ( 7) 00:09:53.923 19.945 - 20.048: 98.3143% ( 3) 00:09:53.923 20.048 - 20.151: 98.3670% ( 3) 00:09:53.923 20.151 - 20.254: 98.4548% ( 5) 00:09:53.923 20.254 - 20.357: 98.5426% ( 5) 00:09:53.923 20.357 - 20.459: 98.5953% ( 3) 00:09:53.923 20.459 - 20.562: 98.6655% ( 4) 00:09:53.923 20.562 - 20.665: 98.7357% ( 4) 00:09:53.923 20.665 - 20.768: 98.7533% ( 1) 00:09:53.923 20.768 - 20.871: 98.7884% ( 2) 00:09:53.923 20.871 - 20.973: 98.8762% ( 5) 00:09:53.923 20.973 - 21.076: 98.8938% ( 1) 00:09:53.923 21.076 - 21.179: 98.9464% ( 3) 00:09:53.923 21.179 - 21.282: 98.9640% ( 1) 00:09:53.923 21.385 - 21.488: 98.9816% ( 1) 00:09:53.923 21.488 - 21.590: 99.0167% ( 2) 00:09:53.923 21.693 - 21.796: 99.0342% ( 1) 00:09:53.923 21.796 - 21.899: 99.0694% ( 2) 00:09:53.923 21.899 - 22.002: 99.1572% ( 5) 00:09:53.923 22.002 - 22.104: 99.1747% ( 1) 00:09:53.923 22.104 - 22.207: 99.2098% ( 2) 00:09:53.923 22.207 - 22.310: 99.2274% ( 1) 00:09:53.923 22.310 - 22.413: 99.3152% ( 5) 00:09:53.923 22.516 - 22.618: 99.3327% ( 1) 00:09:53.923 22.618 - 22.721: 99.3854% ( 3) 00:09:53.923 22.721 - 22.824: 99.4205% ( 2) 00:09:53.923 22.824 - 22.927: 99.4381% ( 1) 00:09:53.923 22.927 - 23.030: 99.4732% ( 2) 00:09:53.923 23.030 - 23.133: 99.4908% ( 1) 00:09:53.923 23.133 - 23.235: 99.5083% ( 1) 00:09:53.923 23.235 - 23.338: 99.5259% ( 1) 00:09:53.923 23.338 - 23.441: 99.5435% ( 1) 00:09:53.923 23.647 - 23.749: 99.5610% ( 1) 00:09:53.923 23.852 - 23.955: 99.5786% ( 1) 00:09:53.923 23.955 - 24.058: 99.5961% ( 1) 00:09:53.923 24.058 - 24.161: 99.6137% ( 1) 00:09:53.923 24.263 - 24.366: 99.6313% ( 1) 00:09:53.923 24.366 - 24.469: 99.6488% ( 1) 00:09:53.923 24.778 - 24.880: 99.6664% ( 1) 00:09:53.923 25.394 - 25.497: 99.6839% ( 1) 00:09:53.923 26.011 - 26.114: 99.7015% ( 1) 00:09:53.923 26.217 - 26.320: 99.7191% ( 1) 00:09:53.923 26.320 - 26.525: 99.7366% ( 1) 00:09:53.923 27.142 - 27.348: 99.7542% ( 1) 00:09:53.923 27.553 - 27.759: 99.7717% ( 1) 00:09:53.923 27.965 - 28.170: 99.8068% ( 2) 00:09:53.923 28.170 - 28.376: 99.8244% ( 1) 00:09:53.923 28.993 - 29.198: 99.8420% ( 1) 00:09:53.923 29.404 - 29.610: 99.8771% ( 2) 00:09:53.923 30.227 - 30.432: 99.8946% ( 1) 00:09:53.923 34.133 - 34.339: 99.9122% ( 1) 00:09:53.923 37.423 - 37.629: 99.9298% ( 1) 00:09:53.923 39.480 - 39.685: 99.9473% ( 1) 00:09:53.923 47.499 - 47.704: 99.9649% ( 1) 00:09:53.923 57.163 - 57.574: 99.9824% ( 1) 00:09:53.924 101.578 - 101.989: 100.0000% ( 1) 00:09:53.924 00:09:53.924 Complete histogram 00:09:53.924 ================== 00:09:53.924 Range in us Cumulative Count 00:09:53.924 7.762 - 7.814: 0.0878% ( 5) 00:09:53.924 7.814 - 7.865: 0.3336% ( 14) 00:09:53.924 7.865 - 7.916: 0.9833% ( 37) 00:09:53.924 7.916 - 7.968: 1.9315% ( 54) 00:09:53.924 7.968 - 8.019: 8.0421% ( 348) 00:09:53.924 8.019 - 8.071: 26.3740% ( 1044) 00:09:53.924 8.071 - 8.122: 45.2151% ( 1073) 00:09:53.924 8.122 - 8.173: 54.1176% ( 507) 00:09:53.924 8.173 - 8.225: 57.8578% ( 213) 00:09:53.924 8.225 - 8.276: 62.1071% ( 242) 00:09:53.924 8.276 - 8.328: 65.6014% ( 199) 00:09:53.924 8.328 - 8.379: 67.3398% ( 99) 00:09:53.924 8.379 - 8.431: 68.4109% ( 61) 00:09:53.924 8.431 - 8.482: 70.0263% ( 92) 00:09:53.924 8.482 - 8.533: 72.3442% ( 132) 00:09:53.924 8.533 - 8.585: 74.8025% ( 140) 00:09:53.924 8.585 - 8.636: 79.3854% ( 261) 00:09:53.924 8.636 - 8.688: 86.7428% ( 419) 00:09:53.924 8.688 - 8.739: 90.6760% ( 224) 00:09:53.924 8.739 - 8.790: 92.6602% ( 113) 00:09:53.924 8.790 - 8.842: 94.3810% ( 98) 00:09:53.924 8.842 - 8.893: 95.7858% ( 80) 00:09:53.924 
8.893 - 8.945: 96.5233% ( 42) 00:09:53.924 8.945 - 8.996: 97.0149% ( 28) 00:09:53.924 8.996 - 9.047: 97.4012% ( 22) 00:09:53.924 9.047 - 9.099: 97.5593% ( 9) 00:09:53.924 9.099 - 9.150: 97.6646% ( 6) 00:09:53.924 9.150 - 9.202: 97.6822% ( 1) 00:09:53.924 9.202 - 9.253: 97.7349% ( 3) 00:09:53.924 9.253 - 9.304: 97.7700% ( 2) 00:09:53.924 9.356 - 9.407: 97.7875% ( 1) 00:09:53.924 9.459 - 9.510: 97.8051% ( 1) 00:09:53.924 9.561 - 9.613: 97.8227% ( 1) 00:09:53.924 9.767 - 9.818: 97.8402% ( 1) 00:09:53.924 10.384 - 10.435: 97.8578% ( 1) 00:09:53.924 10.435 - 10.487: 97.8753% ( 1) 00:09:53.924 10.487 - 10.538: 97.8929% ( 1) 00:09:53.924 10.795 - 10.847: 97.9280% ( 2) 00:09:53.924 11.618 - 11.669: 97.9456% ( 1) 00:09:53.924 12.697 - 12.749: 97.9631% ( 1) 00:09:53.924 12.749 - 12.800: 97.9807% ( 1) 00:09:53.924 13.006 - 13.057: 97.9982% ( 1) 00:09:53.924 13.057 - 13.108: 98.0158% ( 1) 00:09:53.924 13.160 - 13.263: 98.0509% ( 2) 00:09:53.924 13.365 - 13.468: 98.1212% ( 4) 00:09:53.924 13.468 - 13.571: 98.1914% ( 4) 00:09:53.924 13.571 - 13.674: 98.2441% ( 3) 00:09:53.924 13.674 - 13.777: 98.3319% ( 5) 00:09:53.924 13.777 - 13.880: 98.4197% ( 5) 00:09:53.924 13.880 - 13.982: 98.5601% ( 8) 00:09:53.924 13.982 - 14.085: 98.5777% ( 1) 00:09:53.924 14.085 - 14.188: 98.7182% ( 8) 00:09:53.924 14.188 - 14.291: 98.7709% ( 3) 00:09:53.924 14.291 - 14.394: 98.8411% ( 4) 00:09:53.924 14.394 - 14.496: 98.8938% ( 3) 00:09:53.924 14.496 - 14.599: 98.9464% ( 3) 00:09:53.924 14.599 - 14.702: 98.9991% ( 3) 00:09:53.924 14.702 - 14.805: 99.1045% ( 6) 00:09:53.924 14.805 - 14.908: 99.1747% ( 4) 00:09:53.924 14.908 - 15.010: 99.2098% ( 2) 00:09:53.924 15.113 - 15.216: 99.2274% ( 1) 00:09:53.924 15.319 - 15.422: 99.2450% ( 1) 00:09:53.924 15.422 - 15.524: 99.2976% ( 3) 00:09:53.924 15.627 - 15.730: 99.3152% ( 1) 00:09:53.924 15.833 - 15.936: 99.3327% ( 1) 00:09:53.924 15.936 - 16.039: 99.3503% ( 1) 00:09:53.924 16.141 - 16.244: 99.3854% ( 2) 00:09:53.924 16.347 - 16.450: 99.4030% ( 1) 00:09:53.924 16.450 - 16.553: 99.4205% ( 1) 00:09:53.924 16.553 - 16.655: 99.4557% ( 2) 00:09:53.924 16.655 - 16.758: 99.4732% ( 1) 00:09:53.924 16.758 - 16.861: 99.4908% ( 1) 00:09:53.924 16.861 - 16.964: 99.5083% ( 1) 00:09:53.924 17.684 - 17.786: 99.5259% ( 1) 00:09:53.924 18.300 - 18.403: 99.5610% ( 2) 00:09:53.924 18.609 - 18.712: 99.5786% ( 1) 00:09:53.924 18.712 - 18.814: 99.5961% ( 1) 00:09:53.924 18.814 - 18.917: 99.6137% ( 1) 00:09:53.924 18.917 - 19.020: 99.6313% ( 1) 00:09:53.924 19.020 - 19.123: 99.6488% ( 1) 00:09:53.924 19.637 - 19.740: 99.6664% ( 1) 00:09:53.924 19.945 - 20.048: 99.6839% ( 1) 00:09:53.924 20.459 - 20.562: 99.7015% ( 1) 00:09:53.924 21.385 - 21.488: 99.7191% ( 1) 00:09:53.924 22.104 - 22.207: 99.7366% ( 1) 00:09:53.924 24.469 - 24.572: 99.7542% ( 1) 00:09:53.924 24.675 - 24.778: 99.7717% ( 1) 00:09:53.924 24.880 - 24.983: 99.7893% ( 1) 00:09:53.924 25.806 - 25.908: 99.8068% ( 1) 00:09:53.924 26.320 - 26.525: 99.8244% ( 1) 00:09:53.924 26.525 - 26.731: 99.8420% ( 1) 00:09:53.924 27.142 - 27.348: 99.8595% ( 1) 00:09:53.924 27.965 - 28.170: 99.8771% ( 1) 00:09:53.924 31.460 - 31.666: 99.8946% ( 1) 00:09:53.924 31.666 - 31.871: 99.9122% ( 1) 00:09:53.924 40.302 - 40.508: 99.9298% ( 1) 00:09:53.924 44.826 - 45.031: 99.9473% ( 1) 00:09:53.924 45.648 - 45.854: 99.9649% ( 1) 00:09:53.924 53.051 - 53.462: 99.9824% ( 1) 00:09:53.924 100.344 - 100.755: 100.0000% ( 1) 00:09:53.924 00:09:53.924 ************************************ 00:09:53.924 END TEST nvme_overhead 00:09:53.924 ************************************ 
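Note on reading the histogram above: each bucket line has the shape "lower_us - upper_us: cumulative% ( count )", so the percentage column is cumulative across buckets while the parenthesized count belongs to that bucket alone. A minimal shell sketch, assuming one such histogram block has been captured to a file (the file name overhead_histogram.txt is an illustration, not something the test suite writes):

  # Bucket lines are the ones carrying a '%'; strip the parentheses
  # around the last field and sum the per-bucket counts.
  awk '/%/ { gsub(/[()]/, "", $NF); total += $NF }
       END { printf "total samples: %d\n", total }' overhead_histogram.txt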
00:09:53.924 00:09:53.924 real 0m1.261s 00:09:53.924 user 0m1.075s 00:09:53.924 sys 0m0.133s 00:09:53.924 15:45:28 nvme.nvme_overhead -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:53.924 15:45:28 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:53.924 15:45:28 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:53.924 15:45:28 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:09:53.924 15:45:28 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:53.924 15:45:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:53.924 ************************************ 00:09:53.924 START TEST nvme_arbitration 00:09:53.924 ************************************ 00:09:53.924 15:45:28 nvme.nvme_arbitration -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:57.207 Initializing NVMe Controllers 00:09:57.207 Attached to 0000:00:10.0 00:09:57.207 Attached to 0000:00:11.0 00:09:57.207 Attached to 0000:00:13.0 00:09:57.207 Attached to 0000:00:12.0 00:09:57.207 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:57.207 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:57.207 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:57.207 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:57.207 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:57.207 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:57.207 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:57.207 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:57.207 Initialization complete. Launching workers. 
00:09:57.207 Starting thread on core 1 with urgent priority queue 00:09:57.207 Starting thread on core 2 with urgent priority queue 00:09:57.207 Starting thread on core 3 with urgent priority queue 00:09:57.207 Starting thread on core 0 with urgent priority queue 00:09:57.207 QEMU NVMe Ctrl (12340 ) core 0: 4224.00 IO/s 23.67 secs/100000 ios 00:09:57.207 QEMU NVMe Ctrl (12342 ) core 0: 4224.00 IO/s 23.67 secs/100000 ios 00:09:57.207 QEMU NVMe Ctrl (12341 ) core 1: 3882.67 IO/s 25.76 secs/100000 ios 00:09:57.207 QEMU NVMe Ctrl (12342 ) core 1: 3882.67 IO/s 25.76 secs/100000 ios 00:09:57.207 QEMU NVMe Ctrl (12343 ) core 2: 3669.33 IO/s 27.25 secs/100000 ios 00:09:57.207 QEMU NVMe Ctrl (12342 ) core 3: 3690.67 IO/s 27.10 secs/100000 ios 00:09:57.207 ======================================================== 00:09:57.207 00:09:57.207 ************************************ 00:09:57.207 END TEST nvme_arbitration 00:09:57.207 ************************************ 00:09:57.207 00:09:57.207 real 0m3.289s 00:09:57.207 user 0m9.081s 00:09:57.207 sys 0m0.135s 00:09:57.207 15:45:31 nvme.nvme_arbitration -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:57.207 15:45:31 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:57.207 15:45:31 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:57.207 15:45:31 nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:57.207 15:45:31 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:57.207 15:45:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:57.207 ************************************ 00:09:57.207 START TEST nvme_single_aen 00:09:57.207 ************************************ 00:09:57.207 15:45:31 nvme.nvme_single_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:57.465 Asynchronous Event Request test 00:09:57.465 Attached to 0000:00:10.0 00:09:57.465 Attached to 0000:00:11.0 00:09:57.465 Attached to 0000:00:13.0 00:09:57.465 Attached to 0000:00:12.0 00:09:57.465 Reset controller to setup AER completions for this process 00:09:57.465 Registering asynchronous event callbacks... 
00:09:57.465 Getting orig temperature thresholds of all controllers 00:09:57.465 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.465 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.465 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.465 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:57.465 Setting all controllers temperature threshold low to trigger AER 00:09:57.465 Waiting for all controllers temperature threshold to be set lower 00:09:57.465 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.465 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:57.465 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.465 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:57.465 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.465 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:57.465 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:57.465 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:57.465 Waiting for all controllers to trigger AER and reset threshold 00:09:57.465 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.465 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.465 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.465 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.465 Cleaning up... 00:09:57.465 ************************************ 00:09:57.465 END TEST nvme_single_aen 00:09:57.465 ************************************ 00:09:57.465 00:09:57.465 real 0m0.267s 00:09:57.465 user 0m0.091s 00:09:57.465 sys 0m0.125s 00:09:57.465 15:45:32 nvme.nvme_single_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:57.465 15:45:32 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:57.465 15:45:32 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:57.465 15:45:32 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:57.465 15:45:32 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:57.465 15:45:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:57.465 ************************************ 00:09:57.465 START TEST nvme_doorbell_aers 00:09:57.465 ************************************ 00:09:57.465 15:45:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1121 -- # nvme_doorbell_aers 00:09:57.465 15:45:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:57.465 15:45:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:57.465 15:45:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:57.465 15:45:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:57.465 15:45:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:57.465 15:45:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # local bdfs 00:09:57.465 15:45:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:57.465 15:45:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:57.465 15:45:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 
00:09:57.723 15:45:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:09:57.723 15:45:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:57.723 15:45:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:57.723 15:45:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:57.980 [2024-07-20 15:45:32.632389] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:07.944 Executing: test_write_invalid_db 00:10:07.944 Waiting for AER completion... 00:10:07.944 Failure: test_write_invalid_db 00:10:07.944 00:10:07.944 Executing: test_invalid_db_write_overflow_sq 00:10:07.944 Waiting for AER completion... 00:10:07.944 Failure: test_invalid_db_write_overflow_sq 00:10:07.944 00:10:07.944 Executing: test_invalid_db_write_overflow_cq 00:10:07.944 Waiting for AER completion... 00:10:07.944 Failure: test_invalid_db_write_overflow_cq 00:10:07.944 00:10:07.944 15:45:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:07.944 15:45:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:07.944 [2024-07-20 15:45:42.669094] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:17.976 Executing: test_write_invalid_db 00:10:17.976 Waiting for AER completion... 00:10:17.976 Failure: test_write_invalid_db 00:10:17.976 00:10:17.976 Executing: test_invalid_db_write_overflow_sq 00:10:17.976 Waiting for AER completion... 00:10:17.976 Failure: test_invalid_db_write_overflow_sq 00:10:17.976 00:10:17.976 Executing: test_invalid_db_write_overflow_cq 00:10:17.976 Waiting for AER completion... 00:10:17.976 Failure: test_invalid_db_write_overflow_cq 00:10:17.976 00:10:17.976 15:45:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:17.976 15:45:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:17.976 [2024-07-20 15:45:52.710026] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:27.976 Executing: test_write_invalid_db 00:10:27.976 Waiting for AER completion... 00:10:27.976 Failure: test_write_invalid_db 00:10:27.976 00:10:27.976 Executing: test_invalid_db_write_overflow_sq 00:10:27.976 Waiting for AER completion... 00:10:27.976 Failure: test_invalid_db_write_overflow_sq 00:10:27.976 00:10:27.976 Executing: test_invalid_db_write_overflow_cq 00:10:27.976 Waiting for AER completion... 
00:10:27.976 Failure: test_invalid_db_write_overflow_cq 00:10:27.976 00:10:27.976 15:46:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:27.976 15:46:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:27.976 [2024-07-20 15:46:02.766681] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:37.939 Executing: test_write_invalid_db 00:10:37.939 Waiting for AER completion... 00:10:37.939 Failure: test_write_invalid_db 00:10:37.939 00:10:37.939 Executing: test_invalid_db_write_overflow_sq 00:10:37.939 Waiting for AER completion... 00:10:37.939 Failure: test_invalid_db_write_overflow_sq 00:10:37.939 00:10:37.939 Executing: test_invalid_db_write_overflow_cq 00:10:37.939 Waiting for AER completion... 00:10:37.939 Failure: test_invalid_db_write_overflow_cq 00:10:37.939 00:10:37.939 ************************************ 00:10:37.939 END TEST nvme_doorbell_aers 00:10:37.939 ************************************ 00:10:37.939 00:10:37.939 real 0m40.296s 00:10:37.939 user 0m29.749s 00:10:37.939 sys 0m10.187s 00:10:37.940 15:46:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:37.940 15:46:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:37.940 15:46:12 nvme -- nvme/nvme.sh@97 -- # uname 00:10:37.940 15:46:12 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:37.940 15:46:12 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:37.940 15:46:12 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:37.940 15:46:12 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:37.940 15:46:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:37.940 ************************************ 00:10:37.940 START TEST nvme_multi_aen 00:10:37.940 ************************************ 00:10:37.940 15:46:12 nvme.nvme_multi_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:38.198 [2024-07-20 15:46:12.821598] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:38.198 [2024-07-20 15:46:12.821692] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:38.198 [2024-07-20 15:46:12.821709] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:38.198 [2024-07-20 15:46:12.823217] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:38.198 [2024-07-20 15:46:12.823253] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:38.198 [2024-07-20 15:46:12.823269] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:38.198 [2024-07-20 15:46:12.824491] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. 
Dropping the request. 00:10:38.198 [2024-07-20 15:46:12.824537] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:38.198 [2024-07-20 15:46:12.824552] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:38.198 [2024-07-20 15:46:12.825741] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:38.198 [2024-07-20 15:46:12.825778] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:38.198 [2024-07-20 15:46:12.825795] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80454) is not found. Dropping the request. 00:10:38.198 Child process pid: 80976 00:10:38.456 [Child] Asynchronous Event Request test 00:10:38.456 [Child] Attached to 0000:00:10.0 00:10:38.456 [Child] Attached to 0000:00:11.0 00:10:38.456 [Child] Attached to 0000:00:13.0 00:10:38.456 [Child] Attached to 0000:00:12.0 00:10:38.456 [Child] Registering asynchronous event callbacks... 00:10:38.456 [Child] Getting orig temperature thresholds of all controllers 00:10:38.456 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.456 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.456 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.456 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.456 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:38.456 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.456 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.456 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.456 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.456 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.456 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.456 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.456 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.456 [Child] Cleaning up... 00:10:38.456 Asynchronous Event Request test 00:10:38.456 Attached to 0000:00:10.0 00:10:38.456 Attached to 0000:00:11.0 00:10:38.456 Attached to 0000:00:13.0 00:10:38.456 Attached to 0000:00:12.0 00:10:38.456 Reset controller to setup AER completions for this process 00:10:38.456 Registering asynchronous event callbacks... 
00:10:38.456 Getting orig temperature thresholds of all controllers 00:10:38.456 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.456 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.456 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.456 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:38.456 Setting all controllers temperature threshold low to trigger AER 00:10:38.456 Waiting for all controllers temperature threshold to be set lower 00:10:38.456 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.456 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:38.456 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.456 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:38.456 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.456 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:38.456 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:38.456 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:38.456 Waiting for all controllers to trigger AER and reset threshold 00:10:38.456 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.456 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.456 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.456 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:38.456 Cleaning up... 00:10:38.456 ************************************ 00:10:38.456 END TEST nvme_multi_aen 00:10:38.456 ************************************ 00:10:38.456 00:10:38.456 real 0m0.492s 00:10:38.456 user 0m0.164s 00:10:38.456 sys 0m0.230s 00:10:38.456 15:46:13 nvme.nvme_multi_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:38.456 15:46:13 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:38.456 15:46:13 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:38.456 15:46:13 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:38.456 15:46:13 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:38.456 15:46:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:38.456 ************************************ 00:10:38.456 START TEST nvme_startup 00:10:38.456 ************************************ 00:10:38.456 15:46:13 nvme.nvme_startup -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:38.714 Initializing NVMe Controllers 00:10:38.714 Attached to 0000:00:10.0 00:10:38.714 Attached to 0000:00:11.0 00:10:38.714 Attached to 0000:00:13.0 00:10:38.714 Attached to 0000:00:12.0 00:10:38.714 Initialization complete. 00:10:38.714 Time used:137275.828 (us). 
00:10:38.714 ************************************ 00:10:38.714 END TEST nvme_startup 00:10:38.714 ************************************ 00:10:38.714 00:10:38.714 real 0m0.216s 00:10:38.714 user 0m0.089s 00:10:38.714 sys 0m0.096s 00:10:38.714 15:46:13 nvme.nvme_startup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:38.714 15:46:13 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:38.714 15:46:13 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:38.714 15:46:13 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:38.714 15:46:13 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:38.714 15:46:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:38.714 ************************************ 00:10:38.714 START TEST nvme_multi_secondary 00:10:38.714 ************************************ 00:10:38.714 15:46:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@1121 -- # nvme_multi_secondary 00:10:38.714 15:46:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=81021 00:10:38.714 15:46:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:38.714 15:46:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=81022 00:10:38.714 15:46:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:38.714 15:46:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:42.901 Initializing NVMe Controllers 00:10:42.901 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:42.901 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:42.901 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:42.901 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:42.901 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:42.901 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:42.901 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:42.901 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:42.901 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:42.901 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:42.901 Initialization complete. Launching workers. 
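For reference, the three launches traced above reduce to the following shell sketch (binary path and flags copied verbatim from the trace; the explicit backgrounding and final wait are assumptions, since the harness bookkeeps the pids itself). All three spdk_nvme_perf instances pass -i 0 and therefore share one DPDK shared-memory id, so the long-running instance acts as the primary process while the two shorter runs attach to the same controllers as secondaries on disjoint core masks:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  # Primary on core 0 (mask 0x1); it runs longest (-t 5) so the
  # secondaries can start and finish while it is still up.
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
  # Secondaries on cores 1 and 2 (masks 0x2 and 0x4).
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
  wait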
00:10:42.901 ======================================================== 00:10:42.901 Latency(us) 00:10:42.901 Device Information : IOPS MiB/s Average min max 00:10:42.901 PCIE (0000:00:10.0) NSID 1 from core 2: 3455.63 13.50 4628.51 1314.56 11941.00 00:10:42.901 PCIE (0000:00:11.0) NSID 1 from core 2: 3455.63 13.50 4630.30 1344.07 11883.38 00:10:42.901 PCIE (0000:00:13.0) NSID 1 from core 2: 3455.63 13.50 4630.00 1301.84 11441.47 00:10:42.901 PCIE (0000:00:12.0) NSID 1 from core 2: 3455.63 13.50 4636.67 1313.47 10952.69 00:10:42.901 PCIE (0000:00:12.0) NSID 2 from core 2: 3455.63 13.50 4637.06 1334.71 10468.54 00:10:42.901 PCIE (0000:00:12.0) NSID 3 from core 2: 3455.63 13.50 4637.26 1346.01 10665.10 00:10:42.901 ======================================================== 00:10:42.901 Total : 20733.78 80.99 4633.30 1301.84 11941.00 00:10:42.901 00:10:42.901 15:46:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 81021 00:10:42.901 Initializing NVMe Controllers 00:10:42.901 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:42.901 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:42.901 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:42.901 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:42.901 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:42.901 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:42.902 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:42.902 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:42.902 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:42.902 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:42.902 Initialization complete. Launching workers. 00:10:42.902 ======================================================== 00:10:42.902 Latency(us) 00:10:42.902 Device Information : IOPS MiB/s Average min max 00:10:42.902 PCIE (0000:00:10.0) NSID 1 from core 1: 4951.37 19.34 3228.76 1677.42 7668.28 00:10:42.902 PCIE (0000:00:11.0) NSID 1 from core 1: 4951.37 19.34 3230.68 1708.69 7616.65 00:10:42.902 PCIE (0000:00:13.0) NSID 1 from core 1: 4951.37 19.34 3230.70 1794.54 7789.78 00:10:42.902 PCIE (0000:00:12.0) NSID 1 from core 1: 4951.37 19.34 3230.75 1758.98 7876.84 00:10:42.902 PCIE (0000:00:12.0) NSID 2 from core 1: 4951.37 19.34 3230.79 1633.41 7629.97 00:10:42.902 PCIE (0000:00:12.0) NSID 3 from core 1: 4951.37 19.34 3230.85 1474.87 7728.68 00:10:42.902 ======================================================== 00:10:42.902 Total : 29708.22 116.05 3230.42 1474.87 7876.84 00:10:42.902 00:10:44.274 Initializing NVMe Controllers 00:10:44.274 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:44.274 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:44.274 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:44.274 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:44.274 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:44.274 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:44.274 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:44.274 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:44.274 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:44.274 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:44.274 Initialization complete. Launching workers. 
00:10:44.274 ======================================================== 00:10:44.274 Latency(us) 00:10:44.274 Device Information : IOPS MiB/s Average min max 00:10:44.274 PCIE (0000:00:10.0) NSID 1 from core 0: 8715.32 34.04 1834.31 903.15 7754.62 00:10:44.274 PCIE (0000:00:11.0) NSID 1 from core 0: 8715.32 34.04 1835.28 921.49 7442.21 00:10:44.274 PCIE (0000:00:13.0) NSID 1 from core 0: 8715.32 34.04 1835.24 744.54 7800.43 00:10:44.274 PCIE (0000:00:12.0) NSID 1 from core 0: 8715.32 34.04 1835.21 624.33 7761.88 00:10:44.274 PCIE (0000:00:12.0) NSID 2 from core 0: 8715.32 34.04 1835.18 493.82 8005.91 00:10:44.274 PCIE (0000:00:12.0) NSID 3 from core 0: 8718.52 34.06 1834.48 380.16 7530.36 00:10:44.274 ======================================================== 00:10:44.274 Total : 52295.14 204.28 1834.95 380.16 8005.91 00:10:44.274 00:10:44.274 15:46:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 81022 00:10:44.274 15:46:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=81099 00:10:44.274 15:46:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:44.274 15:46:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=81100 00:10:44.274 15:46:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:44.274 15:46:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:47.556 Initializing NVMe Controllers 00:10:47.556 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:47.556 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:47.556 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:47.556 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:47.556 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:47.556 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:47.556 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:47.556 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:47.556 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:47.556 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:47.556 Initialization complete. Launching workers. 
00:10:47.556 ======================================================== 00:10:47.556 Latency(us) 00:10:47.556 Device Information : IOPS MiB/s Average min max 00:10:47.556 PCIE (0000:00:10.0) NSID 1 from core 0: 5501.57 21.49 2905.89 924.68 5540.52 00:10:47.556 PCIE (0000:00:11.0) NSID 1 from core 0: 5501.57 21.49 2907.67 955.33 5496.49 00:10:47.556 PCIE (0000:00:13.0) NSID 1 from core 0: 5501.57 21.49 2907.86 954.64 5411.41 00:10:47.556 PCIE (0000:00:12.0) NSID 1 from core 0: 5501.57 21.49 2908.07 937.22 5377.97 00:10:47.556 PCIE (0000:00:12.0) NSID 2 from core 0: 5501.57 21.49 2908.02 942.32 5492.54 00:10:47.556 PCIE (0000:00:12.0) NSID 3 from core 0: 5506.90 21.51 2905.15 946.90 5813.25 00:10:47.556 ======================================================== 00:10:47.556 Total : 33014.74 128.96 2907.11 924.68 5813.25 00:10:47.556 00:10:47.556 Initializing NVMe Controllers 00:10:47.556 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:47.556 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:47.556 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:47.556 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:47.556 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:47.556 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:47.556 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:47.556 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:47.556 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:47.556 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:47.556 Initialization complete. Launching workers. 00:10:47.556 ======================================================== 00:10:47.556 Latency(us) 00:10:47.556 Device Information : IOPS MiB/s Average min max 00:10:47.556 PCIE (0000:00:10.0) NSID 1 from core 1: 5241.47 20.47 3049.98 1014.73 6148.38 00:10:47.556 PCIE (0000:00:11.0) NSID 1 from core 1: 5241.47 20.47 3052.03 1041.39 6309.89 00:10:47.556 PCIE (0000:00:13.0) NSID 1 from core 1: 5241.47 20.47 3052.07 1039.92 6160.84 00:10:47.556 PCIE (0000:00:12.0) NSID 1 from core 1: 5241.47 20.47 3052.21 1050.44 5877.19 00:10:47.556 PCIE (0000:00:12.0) NSID 2 from core 1: 5241.47 20.47 3052.39 1034.99 5527.53 00:10:47.556 PCIE (0000:00:12.0) NSID 3 from core 1: 5241.47 20.47 3052.40 1040.22 5912.78 00:10:47.556 ======================================================== 00:10:47.556 Total : 31448.80 122.85 3051.85 1014.73 6309.89 00:10:47.556 00:10:49.454 Initializing NVMe Controllers 00:10:49.454 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:49.454 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:49.454 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:49.454 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:49.454 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:49.454 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:49.454 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:49.454 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:49.454 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:49.454 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:49.454 Initialization complete. Launching workers. 
00:10:49.454 ======================================================== 00:10:49.454 Latency(us) 00:10:49.454 Device Information : IOPS MiB/s Average min max 00:10:49.454 PCIE (0000:00:10.0) NSID 1 from core 2: 3431.80 13.41 4660.90 983.58 10817.93 00:10:49.455 PCIE (0000:00:11.0) NSID 1 from core 2: 3431.80 13.41 4661.69 1000.78 10650.26 00:10:49.455 PCIE (0000:00:13.0) NSID 1 from core 2: 3431.80 13.41 4665.58 1002.83 10906.60 00:10:49.455 PCIE (0000:00:12.0) NSID 1 from core 2: 3431.80 13.41 4665.48 1021.10 11005.00 00:10:49.455 PCIE (0000:00:12.0) NSID 2 from core 2: 3431.80 13.41 4665.34 1005.76 11134.87 00:10:49.455 PCIE (0000:00:12.0) NSID 3 from core 2: 3431.80 13.41 4665.23 1002.29 10888.74 00:10:49.455 ======================================================== 00:10:49.455 Total : 20590.80 80.43 4664.04 983.58 11134.87 00:10:49.455 00:10:49.455 ************************************ 00:10:49.455 END TEST nvme_multi_secondary 00:10:49.455 ************************************ 00:10:49.455 15:46:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 81099 00:10:49.455 15:46:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 81100 00:10:49.455 00:10:49.455 real 0m10.780s 00:10:49.455 user 0m18.332s 00:10:49.455 sys 0m0.871s 00:10:49.455 15:46:24 nvme.nvme_multi_secondary -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:49.455 15:46:24 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:49.713 15:46:24 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:49.713 15:46:24 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:49.713 15:46:24 nvme -- common/autotest_common.sh@1085 -- # [[ -e /proc/80046 ]] 00:10:49.713 15:46:24 nvme -- common/autotest_common.sh@1086 -- # kill 80046 00:10:49.713 15:46:24 nvme -- common/autotest_common.sh@1087 -- # wait 80046 00:10:49.713 [2024-07-20 15:46:24.304814] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.304945] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.304995] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.305045] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.306210] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.306301] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.306381] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.306434] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.307595] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 
00:10:49.713 [2024-07-20 15:46:24.307692] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.307737] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.307793] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.308910] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.309007] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.309052] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.309100] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80975) is not found. Dropping the request. 00:10:49.713 [2024-07-20 15:46:24.426587] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:10:49.713 15:46:24 nvme -- common/autotest_common.sh@1089 -- # rm -f /var/run/spdk_stub0 00:10:49.713 15:46:24 nvme -- common/autotest_common.sh@1093 -- # echo 2 00:10:49.713 15:46:24 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:49.713 15:46:24 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:49.713 15:46:24 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:49.713 15:46:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:49.713 ************************************ 00:10:49.713 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:49.713 ************************************ 00:10:49.713 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:49.971 * Looking for test storage... 
00:10:49.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # bdfs=() 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # local bdfs 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=81257 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 81257 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # '[' -z 81257 ']' 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:49.971 15:46:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:50.229 [2024-07-20 15:46:24.779050] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:10:50.229 [2024-07-20 15:46:24.779182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81257 ] 00:10:50.229 [2024-07-20 15:46:24.947536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.229 [2024-07-20 15:46:24.991047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.229 [2024-07-20 15:46:24.991257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.229 [2024-07-20 15:46:24.991466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.229 [2024-07-20 15:46:24.991298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.796 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:50.796 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # return 0 00:10:50.796 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:50.796 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.796 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:51.054 nvme0n1 00:10:51.054 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.054 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:51.054 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_1fAhX.txt 00:10:51.054 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:51.054 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.054 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:51.054 true 00:10:51.054 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.054 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:51.054 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721490385 00:10:51.054 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=81280 00:10:51.054 15:46:25 
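The xtrace that follows exercises the stuck-admin-command flow step by step: attach the controller as nvme0, arm a one-shot error injection on admin opcode 10 (Get Features) that holds the command instead of submitting it, issue that command in the background via bdev_nvme_send_cmd, then reset the controller and check how long the injected command took to complete. Condensed into a standalone sketch built from the rpc.py subcommands visible in the trace (the test itself drives them through its rpc_cmd wrapper, and its pid bookkeeping and status checks are omitted here):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # Hold the next admin Get Features (opc 10) for up to 15 s and finish
  # it with SCT 0 / SC 1 rather than submitting it to the device.
  "$RPC" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # Fire the Get Features that the injection will capture (backgrounded,
  # as in the trace), give it a moment, then reset while it is pending.
  "$RPC" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
      -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
  sleep 2
  "$RPC" bdev_nvme_reset_controller nvme0
  wait
  "$RPC" bdev_nvme_detach_controller nvme0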
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:51.054 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:51.054 15:46:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:53.015 [2024-07-20 15:46:27.659637] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:53.015 [2024-07-20 15:46:27.660061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:53.015 [2024-07-20 15:46:27.660183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:53.015 [2024-07-20 15:46:27.660292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.015 [2024-07-20 15:46:27.662113] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 81280 00:10:53.015 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 81280 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 81280 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_1fAhX.txt 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_1fAhX.txt 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 81257 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # '[' -z 81257 ']' 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # kill -0 81257 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # uname 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:53.015 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81257 00:10:53.274 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:53.274 killing process with pid 81257 00:10:53.274 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:53.274 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81257' 00:10:53.274 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@965 -- # kill 81257 00:10:53.274 15:46:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # wait 81257 00:10:53.534 15:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:53.534 15:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:53.534 00:10:53.534 real 0m3.759s 00:10:53.534 user 0m12.918s 00:10:53.534 sys 0m0.703s 00:10:53.534 15:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:53.534 ************************************ 00:10:53.534 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:53.534 ************************************ 00:10:53.534 15:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:53.534 15:46:28 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:53.534 15:46:28 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:53.534 15:46:28 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:53.534 15:46:28 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:53.534 15:46:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:53.534 ************************************ 00:10:53.534 START TEST nvme_fio 00:10:53.534 ************************************ 00:10:53.534 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1121 -- # nvme_fio_test 00:10:53.534 15:46:28 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:53.534 15:46:28 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:53.534 15:46:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:53.534 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:53.534 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # local bdfs 00:10:53.534 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:53.534 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:53.534 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:10:53.792 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:53.792 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:53.792 15:46:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:53.792 15:46:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:53.792 15:46:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:53.792 15:46:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:53.792 15:46:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:54.051 15:46:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:54.051 15:46:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:54.310 15:46:28 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:54.310 15:46:28 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:54.310 15:46:28 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:54.310 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:54.310 fio-3.35 00:10:54.310 Starting 1 thread 00:10:58.500 00:10:58.500 test: (groupid=0, jobs=1): err= 0: pid=81413: Sat Jul 20 15:46:32 2024 00:10:58.500 read: IOPS=23.5k, BW=91.9MiB/s (96.3MB/s)(184MiB/2001msec) 00:10:58.500 slat (nsec): min=3780, max=70902, avg=4400.56, stdev=1107.84 00:10:58.500 clat (usec): min=195, max=12266, avg=2717.89, stdev=313.62 00:10:58.500 lat (usec): min=199, max=12337, avg=2722.29, stdev=314.12 00:10:58.500 clat percentiles (usec): 00:10:58.500 | 1.00th=[ 2474], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2606], 00:10:58.500 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:10:58.500 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2835], 95.00th=[ 2900], 00:10:58.500 | 99.00th=[ 3490], 99.50th=[ 4555], 99.90th=[ 6587], 99.95th=[ 9110], 00:10:58.500 | 99.99th=[11863] 00:10:58.500 bw ( KiB/s): min=89936, max=95664, per=99.10%, avg=93226.67, stdev=2957.81, samples=3 00:10:58.500 iops : min=22484, max=23916, avg=23306.67, stdev=739.45, samples=3 00:10:58.500 write: IOPS=23.4k, BW=91.2MiB/s (95.7MB/s)(183MiB/2001msec); 0 zone resets 00:10:58.500 slat (nsec): min=3882, max=45091, avg=4537.30, stdev=1202.03 00:10:58.500 clat (usec): min=172, max=12064, avg=2725.01, stdev=323.51 00:10:58.500 lat (usec): min=176, max=12078, avg=2729.55, stdev=324.05 00:10:58.500 clat percentiles (usec): 00:10:58.500 | 1.00th=[ 2474], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2606], 00:10:58.500 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:10:58.500 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2835], 95.00th=[ 2900], 
00:10:58.500 | 99.00th=[ 3589], 99.50th=[ 4752], 99.90th=[ 6849], 99.95th=[ 9503], 00:10:58.500 | 99.99th=[11600] 00:10:58.500 bw ( KiB/s): min=89352, max=97280, per=99.89%, avg=93325.33, stdev=3964.03, samples=3 00:10:58.500 iops : min=22338, max=24320, avg=23331.33, stdev=991.01, samples=3 00:10:58.500 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:58.500 lat (msec) : 2=0.08%, 4=99.20%, 10=0.63%, 20=0.04% 00:10:58.500 cpu : usr=99.30%, sys=0.15%, ctx=6, majf=0, minf=627 00:10:58.500 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:58.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.500 issued rwts: total=47061,46739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.500 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.500 00:10:58.500 Run status group 0 (all jobs): 00:10:58.500 READ: bw=91.9MiB/s (96.3MB/s), 91.9MiB/s-91.9MiB/s (96.3MB/s-96.3MB/s), io=184MiB (193MB), run=2001-2001msec 00:10:58.500 WRITE: bw=91.2MiB/s (95.7MB/s), 91.2MiB/s-91.2MiB/s (95.7MB/s-95.7MB/s), io=183MiB (191MB), run=2001-2001msec 00:10:58.500 ----------------------------------------------------- 00:10:58.500 Suppressions used: 00:10:58.500 count bytes template 00:10:58.500 1 32 /usr/src/fio/parse.c 00:10:58.500 1 8 libtcmalloc_minimal.so 00:10:58.500 ----------------------------------------------------- 00:10:58.500 00:10:58.500 15:46:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:58.500 15:46:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:58.500 15:46:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:58.500 15:46:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:58.500 15:46:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:58.500 15:46:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:58.759 15:46:33 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:58.759 15:46:33 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:58.759 15:46:33 nvme.nvme_fio -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:58.759 15:46:33 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:59.018 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:59.018 fio-3.35 00:10:59.018 Starting 1 thread 00:11:03.207 00:11:03.207 test: (groupid=0, jobs=1): err= 0: pid=81469: Sat Jul 20 15:46:37 2024 00:11:03.207 read: IOPS=23.3k, BW=91.1MiB/s (95.5MB/s)(182MiB/2001msec) 00:11:03.207 slat (nsec): min=3830, max=59182, avg=4472.64, stdev=1087.93 00:11:03.207 clat (usec): min=194, max=12117, avg=2740.85, stdev=310.69 00:11:03.207 lat (usec): min=198, max=12173, avg=2745.33, stdev=311.14 00:11:03.207 clat percentiles (usec): 00:11:03.207 | 1.00th=[ 2507], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2638], 00:11:03.207 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:11:03.207 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2835], 95.00th=[ 2868], 00:11:03.207 | 99.00th=[ 3064], 99.50th=[ 4752], 99.90th=[ 6521], 99.95th=[ 9241], 00:11:03.207 | 99.99th=[11731] 00:11:03.207 bw ( KiB/s): min=91528, max=94568, per=99.43%, avg=92778.67, stdev=1589.98, samples=3 00:11:03.207 iops : min=22882, max=23642, avg=23195.33, stdev=397.16, samples=3 00:11:03.207 write: IOPS=23.2k, BW=90.5MiB/s (94.9MB/s)(181MiB/2001msec); 0 zone resets 00:11:03.207 slat (nsec): min=3925, max=54596, avg=4597.01, stdev=1097.82 00:11:03.207 clat (usec): min=225, max=11940, avg=2746.71, stdev=321.39 00:11:03.207 lat (usec): min=229, max=11954, avg=2751.31, stdev=321.85 00:11:03.207 clat percentiles (usec): 00:11:03.207 | 1.00th=[ 2507], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2638], 00:11:03.207 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:11:03.207 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2835], 95.00th=[ 2868], 00:11:03.207 | 99.00th=[ 3163], 99.50th=[ 4948], 99.90th=[ 6849], 99.95th=[ 9503], 00:11:03.207 | 99.99th=[11469] 00:11:03.207 bw ( KiB/s): min=91256, max=93816, per=100.00%, avg=92861.33, stdev=1398.54, samples=3 00:11:03.207 iops : min=22814, max=23454, avg=23215.33, stdev=349.64, samples=3 00:11:03.207 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:03.207 lat (msec) : 2=0.05%, 4=99.26%, 10=0.61%, 20=0.04% 00:11:03.207 cpu : usr=99.40%, sys=0.10%, ctx=3, majf=0, minf=626 00:11:03.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:03.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.207 issued rwts: total=46677,46359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.207 00:11:03.207 Run status group 0 (all jobs): 00:11:03.207 READ: bw=91.1MiB/s (95.5MB/s), 
91.1MiB/s-91.1MiB/s (95.5MB/s-95.5MB/s), io=182MiB (191MB), run=2001-2001msec 00:11:03.207 WRITE: bw=90.5MiB/s (94.9MB/s), 90.5MiB/s-90.5MiB/s (94.9MB/s-94.9MB/s), io=181MiB (190MB), run=2001-2001msec 00:11:03.207 ----------------------------------------------------- 00:11:03.207 Suppressions used: 00:11:03.207 count bytes template 00:11:03.207 1 32 /usr/src/fio/parse.c 00:11:03.207 1 8 libtcmalloc_minimal.so 00:11:03.207 ----------------------------------------------------- 00:11:03.207 00:11:03.207 15:46:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:03.207 15:46:37 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:03.207 15:46:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:03.207 15:46:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:03.207 15:46:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:03.207 15:46:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:03.467 15:46:38 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:03.467 15:46:38 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:03.467 15:46:38 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:03.467 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:03.467 fio-3.35 00:11:03.467 Starting 1 thread 00:11:07.651 00:11:07.651 test: 
(groupid=0, jobs=1): err= 0: pid=81531: Sat Jul 20 15:46:42 2024 00:11:07.651 read: IOPS=23.4k, BW=91.3MiB/s (95.7MB/s)(183MiB/2001msec) 00:11:07.651 slat (nsec): min=3789, max=53836, avg=4460.00, stdev=1076.11 00:11:07.651 clat (usec): min=206, max=13219, avg=2735.42, stdev=439.89 00:11:07.651 lat (usec): min=210, max=13273, avg=2739.88, stdev=440.50 00:11:07.651 clat percentiles (usec): 00:11:07.651 | 1.00th=[ 2474], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2606], 00:11:07.651 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:11:07.651 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2835], 95.00th=[ 2868], 00:11:07.651 | 99.00th=[ 4752], 99.50th=[ 6325], 99.90th=[ 8029], 99.95th=[ 9896], 00:11:07.651 | 99.99th=[12780] 00:11:07.651 bw ( KiB/s): min=89848, max=94952, per=98.63%, avg=92189.33, stdev=2577.95, samples=3 00:11:07.651 iops : min=22462, max=23738, avg=23047.33, stdev=644.49, samples=3 00:11:07.651 write: IOPS=23.2k, BW=90.7MiB/s (95.1MB/s)(182MiB/2001msec); 0 zone resets 00:11:07.651 slat (nsec): min=3896, max=42356, avg=4583.97, stdev=1131.61 00:11:07.651 clat (usec): min=179, max=13021, avg=2741.34, stdev=454.14 00:11:07.651 lat (usec): min=183, max=13035, avg=2745.92, stdev=454.78 00:11:07.651 clat percentiles (usec): 00:11:07.651 | 1.00th=[ 2474], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2606], 00:11:07.651 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:11:07.651 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2835], 95.00th=[ 2868], 00:11:07.651 | 99.00th=[ 4883], 99.50th=[ 6521], 99.90th=[ 8160], 99.95th=[10290], 00:11:07.651 | 99.99th=[12518] 00:11:07.651 bw ( KiB/s): min=89272, max=96456, per=99.36%, avg=92288.00, stdev=3727.97, samples=3 00:11:07.651 iops : min=22318, max=24114, avg=23072.00, stdev=931.99, samples=3 00:11:07.651 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:07.651 lat (msec) : 2=0.05%, 4=98.63%, 10=1.22%, 20=0.05% 00:11:07.651 cpu : usr=99.30%, sys=0.20%, ctx=3, majf=0, minf=627 00:11:07.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:07.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.651 issued rwts: total=46756,46465,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.651 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.651 00:11:07.651 Run status group 0 (all jobs): 00:11:07.651 READ: bw=91.3MiB/s (95.7MB/s), 91.3MiB/s-91.3MiB/s (95.7MB/s-95.7MB/s), io=183MiB (192MB), run=2001-2001msec 00:11:07.651 WRITE: bw=90.7MiB/s (95.1MB/s), 90.7MiB/s-90.7MiB/s (95.1MB/s-95.1MB/s), io=182MiB (190MB), run=2001-2001msec 00:11:07.909 ----------------------------------------------------- 00:11:07.909 Suppressions used: 00:11:07.909 count bytes template 00:11:07.909 1 32 /usr/src/fio/parse.c 00:11:07.909 1 8 libtcmalloc_minimal.so 00:11:07.909 ----------------------------------------------------- 00:11:07.909 00:11:07.909 15:46:42 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:07.909 15:46:42 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:07.909 15:46:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:07.909 15:46:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:08.167 15:46:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 
'trtype:PCIe traddr:0000:00:13.0' 00:11:08.167 15:46:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:08.431 15:46:42 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:08.431 15:46:42 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:08.431 15:46:42 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:08.431 15:46:42 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:11:08.431 15:46:42 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:08.431 15:46:42 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:11:08.431 15:46:42 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:08.431 15:46:42 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:11:08.431 15:46:42 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:11:08.431 15:46:42 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:11:08.431 15:46:42 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:08.431 15:46:42 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:11:08.431 15:46:42 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:11:08.431 15:46:43 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:08.431 15:46:43 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:08.431 15:46:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:11:08.431 15:46:43 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:08.431 15:46:43 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:08.431 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:08.431 fio-3.35 00:11:08.431 Starting 1 thread 00:11:12.614 00:11:12.614 test: (groupid=0, jobs=1): err= 0: pid=81591: Sat Jul 20 15:46:47 2024 00:11:12.614 read: IOPS=24.3k, BW=94.8MiB/s (99.5MB/s)(190MiB/2001msec) 00:11:12.614 slat (nsec): min=3980, max=59400, avg=4395.83, stdev=1039.91 00:11:12.614 clat (usec): min=230, max=12042, avg=2632.11, stdev=302.54 00:11:12.614 lat (usec): min=235, max=12095, avg=2636.50, stdev=302.97 00:11:12.614 clat percentiles (usec): 00:11:12.614 | 1.00th=[ 2409], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2540], 00:11:12.614 | 30.00th=[ 2573], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2638], 00:11:12.614 | 70.00th=[ 2638], 80.00th=[ 2671], 90.00th=[ 2737], 95.00th=[ 2769], 00:11:12.614 | 99.00th=[ 3130], 99.50th=[ 4228], 99.90th=[ 6390], 99.95th=[ 9110], 00:11:12.614 | 99.99th=[11731] 00:11:12.614 bw ( KiB/s): min=94096, max=98160, per=99.12%, avg=96266.67, stdev=2046.14, samples=3 00:11:12.614 iops : min=23524, max=24542, avg=24067.33, stdev=512.46, samples=3 00:11:12.614 write: IOPS=24.1k, BW=94.3MiB/s (98.8MB/s)(189MiB/2001msec); 0 zone resets 
00:11:12.614 slat (nsec): min=4060, max=44667, avg=4525.63, stdev=928.02 00:11:12.614 clat (usec): min=205, max=11810, avg=2638.79, stdev=312.80 00:11:12.614 lat (usec): min=210, max=11824, avg=2643.32, stdev=313.23 00:11:12.614 clat percentiles (usec): 00:11:12.614 | 1.00th=[ 2409], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2540], 00:11:12.614 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2606], 60.00th=[ 2638], 00:11:12.614 | 70.00th=[ 2671], 80.00th=[ 2671], 90.00th=[ 2737], 95.00th=[ 2769], 00:11:12.614 | 99.00th=[ 3163], 99.50th=[ 4490], 99.90th=[ 6718], 99.95th=[ 9372], 00:11:12.614 | 99.99th=[11469] 00:11:12.614 bw ( KiB/s): min=94000, max=97704, per=99.87%, avg=96397.33, stdev=2078.96, samples=3 00:11:12.614 iops : min=23500, max=24426, avg=24099.33, stdev=519.74, samples=3 00:11:12.614 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:12.614 lat (msec) : 2=0.12%, 4=99.19%, 10=0.61%, 20=0.04% 00:11:12.614 cpu : usr=99.20%, sys=0.25%, ctx=15, majf=0, minf=623 00:11:12.614 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:12.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.614 issued rwts: total=48584,48283,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.614 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.614 00:11:12.614 Run status group 0 (all jobs): 00:11:12.614 READ: bw=94.8MiB/s (99.5MB/s), 94.8MiB/s-94.8MiB/s (99.5MB/s-99.5MB/s), io=190MiB (199MB), run=2001-2001msec 00:11:12.614 WRITE: bw=94.3MiB/s (98.8MB/s), 94.3MiB/s-94.3MiB/s (98.8MB/s-98.8MB/s), io=189MiB (198MB), run=2001-2001msec 00:11:12.872 ----------------------------------------------------- 00:11:12.872 Suppressions used: 00:11:12.872 count bytes template 00:11:12.872 1 32 /usr/src/fio/parse.c 00:11:12.872 1 8 libtcmalloc_minimal.so 00:11:12.872 ----------------------------------------------------- 00:11:12.872 00:11:12.872 15:46:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:12.872 15:46:47 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:12.872 00:11:12.872 real 0m19.144s 00:11:12.872 user 0m14.810s 00:11:12.872 sys 0m4.941s 00:11:12.872 ************************************ 00:11:12.872 END TEST nvme_fio 00:11:12.872 ************************************ 00:11:12.872 15:46:47 nvme.nvme_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:12.872 15:46:47 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:12.872 ************************************ 00:11:12.872 END TEST nvme 00:11:12.872 ************************************ 00:11:12.872 00:11:12.872 real 1m30.173s 00:11:12.872 user 3m30.431s 00:11:12.872 sys 0m22.315s 00:11:12.872 15:46:47 nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:12.872 15:46:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:12.872 15:46:47 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:11:12.872 15:46:47 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:12.872 15:46:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:12.872 15:46:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:12.872 15:46:47 -- common/autotest_common.sh@10 -- # set +x 00:11:12.872 ************************************ 00:11:12.872 START TEST nvme_scc 00:11:12.872 ************************************ 00:11:12.872 15:46:47 nvme_scc -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:12.872 * Looking for test storage... 00:11:13.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:13.130 15:46:47 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:13.130 15:46:47 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:13.130 15:46:47 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:13.130 15:46:47 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:13.130 15:46:47 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:13.130 15:46:47 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.130 15:46:47 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.130 15:46:47 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.130 15:46:47 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.130 15:46:47 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.130 15:46:47 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.130 15:46:47 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:13.130 15:46:47 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.130 15:46:47 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:13.130 15:46:47 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:13.130 15:46:47 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:13.130 15:46:47 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:13.130 15:46:47 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:13.130 15:46:47 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:13.130 15:46:47 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:13.130 15:46:47 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:13.130 
15:46:47 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:13.130 15:46:47 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:13.130 15:46:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:13.130 15:46:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:13.130 15:46:47 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:13.130 15:46:47 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:13.387 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:13.645 Waiting for block devices as requested 00:11:13.902 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:13.902 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:13.902 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:14.160 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:19.427 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:19.427 15:46:53 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:19.427 15:46:53 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:19.427 15:46:53 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:19.427 15:46:53 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:19.427 15:46:53 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 
00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.427 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:19.428 15:46:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:19.428 15:46:53 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 
15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:19.428 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:19.429 15:46:53 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc 
-- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:19.429 15:46:53 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.429 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:19.430 15:46:53 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:19.430 15:46:53 
nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:19.430 15:46:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:19.430 15:46:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.430 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[npdg]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:19.431 15:46:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
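[Editor's note] In the id-ns dump above, flbas=0x4 selects lbaf4 (`ms:0 lbads:12 rp:0 (in use)`), i.e. 4096-byte data blocks with no metadata; the second controller's namespace later in the trace reports flbas=0x7, pointing at lbaf7 (`ms:64 lbads:12`). A hedged sketch of that decode, with the values copied from the trace and the extraction helper purely illustrative:

```bash
# Decode the in-use LBA format from flbas, per the values captured above:
# the low 4 bits of flbas index lbafN, and lbads is log2(block size).
flbas=0x4
lbaf4='ms:0 lbads:12 rp:0 (in use)'

idx=$(( flbas & 0xf ))                                    # -> 4
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "$lbaf4")
ms=$(sed -n 's/.*ms:\([0-9]*\).*/\1/p' <<< "$lbaf4")
echo "lbaf${idx}: $(( 1 << lbads ))-byte blocks, ${ms} metadata bytes"
# -> lbaf4: 4096-byte blocks, 0 metadata bytes
```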
00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.431 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:19.432 15:46:54 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:19.432 15:46:54 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:19.432 15:46:54 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:19.432 15:46:54 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:19.432 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 
15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 
15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:19.433 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:19.434 15:46:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.434 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W 
operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:19.435 15:46:54 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:19.435 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsattr]="0"' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 
15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:19.436 15:46:54 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:19.436 15:46:54 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:19.436 15:46:54 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:19.436 15:46:54 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl 
/dev/nvme2 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.436 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:19.702 15:46:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 
15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[frmw]="0x3"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:19.702 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 
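The records above repeat one pattern per register: split each `reg: val` line of `nvme id-ctrl` output on `:` via `IFS=:` and `read -r reg val`, then `eval` an assignment into a named global associative array (here `nvme2`). A minimal sketch of that pattern, assuming nvme-cli is installed; `nvme_get_sketch` is an illustrative name and differs in details from the verbatim nvme/functions.sh helper:

  # Parse `nvme id-ctrl` / `nvme id-ns` output into a named global
  # associative array. Sketch only -- not the real nvme/functions.sh.
  nvme_get_sketch() {
    local ref=$1 cmd=$2 dev=$3 reg val
    declare -gA "$ref=()"              # e.g. declares global nvme2=()
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}         # keys in the trace carry no spaces
      [[ -n $reg && -n $val ]] || continue
      eval "${ref}[\$reg]=\${val# }"   # e.g. nvme2[mdts]=7
    done < <(nvme "$cmd" "$dev")
  }
  nvme_get_sketch nvme2 id-ctrl /dev/nvme2   # afterwards: ${nvme2[mdts]} -> 7

Values that themselves contain colons (the ps0 power-state line earlier in the trace) survive intact because `read -r reg val` puts the entire remainder of the line into val.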
00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.703 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
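The hex capability words captured just above (sqes=0x66, cqes=0x44, oncs=0x15d, sgls=0x1) are bit fields from the Identify Controller page. For SQES/CQES the low nibble is the required entry size and the high nibble the maximum, both as powers of two, so 0x66 means 64-byte submission entries and 0x44 means 16-byte completion entries. A quick check against the array populated above (field names mirror the trace; the fallback literal only makes the snippet standalone):

  sqes=${nvme2[sqes]:-0x66}            # from the id-ctrl parse above
  printf 'SQE min %d / max %d bytes\n' \
    $((1 << (sqes & 0xf))) $((1 << ((sqes >> 4) & 0xf)))   # -> 64 / 64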
00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:19.704 15:46:54 
00:11:19.704 15:46:54 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns /dev/nvme2n1: remaining fields read into nvme2n1[] (reg=val):
00:11:19.704 15:46:54 nvme_scc --   nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0
00:11:19.705 15:46:54 nvme_scc --   fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:11:19.705 15:46:54 nvme_scc --   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0
00:11:19.705 15:46:54 nvme_scc --   nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:19.705 15:46:54 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:11:19.705 15:46:54 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:11:19.705 15:46:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:11:19.705 15:46:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:19.705 15:46:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:11:19.705 15:46:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:11:19.705 15:46:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:11:19.706 15:46:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
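The functions.sh@16-@23 trace above is the nvme_get helper doing one pass over `nvme id-ns` output: each line is split at the first ':' into a register name and a value, and the pair is eval'd into a global associative array named after the device. A minimal sketch of that pattern, assuming nvme-cli is on PATH; the names here are illustrative, not the exact nvme/functions.sh code:

    # Minimal re-creation of the nvme_get pattern traced above. Illustrative
    # only -- the real nvme/functions.sh carries extra shifting and quoting.
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "${ref}=()"                 # e.g. a global nvme2n1=() map
        while IFS=: read -r reg val; do       # split at the first ':' only
            reg=${reg//[[:space:]]/}          # 'lbaf  4 ' -> 'lbaf4'
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\"\${val# }\""    # trim one leading space, store
        done < <(nvme id-ns "$dev")           # assumes nvme-cli on PATH
    }
    # usage: nvme_get_sketch nvme2n1 /dev/nvme2n1; echo "${nvme2n1[nsze]}"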
00:11:19.706 15:46:54 nvme_scc -- nvme/functions.sh@21-23 -- # fields read into nvme2n2[] -- identical to nvme2n1:
00:11:19.706 15:46:54 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f
00:11:19.706 15:46:54 nvme_scc --   dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:11:19.706 15:46:54 nvme_scc --   noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:11:19.707 15:46:54 nvme_scc --   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:19.707 15:46:54 nvme_scc --   lbaf0-lbaf7 as for nvme2n1, lbaf4='ms:0 lbads:12 rp:0 (in use)'
00:11:19.707 15:46:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:11:19.707 15:46:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:19.707 15:46:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:11:19.707 15:46:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:11:19.707 15:46:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
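As the condensed trace shows, nvme2n2 came back byte-for-byte identical to nvme2n1, which is expected for identically configured QEMU namespaces. A quick hypothetical check that two parsed namespaces really match, assuming arrays filled as in the earlier sketch:

    # Hypothetical helper: dump sorted reg=val pairs and diff two namespaces.
    dump_ns() {
        local -n _ns=$1                      # nameref, needs bash >= 4.3
        local k
        for k in "${!_ns[@]}"; do printf '%s=%s\n' "$k" "${_ns[$k]}"; done | sort
    }
    diff <(dump_ns nvme2n1) <(dump_ns nvme2n2) && echo 'identical identify data'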
00:11:19.707 15:46:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:11:19.707 15:46:54 nvme_scc -- nvme/functions.sh@21-23 -- # fields read into nvme2n3[] -- again identical to nvme2n1/nvme2n2:
00:11:19.707 15:46:54 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f
00:11:19.708 15:46:54 nvme_scc --   dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:11:19.708 15:46:54 nvme_scc --   noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:11:19.708 15:46:54 nvme_scc --   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:11:19.708 15:46:54 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:11:19.708 15:46:54 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
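Each lbafN value encodes one LBA format as 'ms:<metadata bytes> lbads:<log2 of data size> rp:<relative performance>', and flbas selects the active one, so flbas=0x4 plus lbaf4='ms:0 lbads:12 ...' means 4096-byte blocks with no metadata. A small decoder, assuming the arrays were filled as sketched earlier:

    fmt=$(( ${nvme2n3[flbas]} & 0xf ))        # low nibble = active format index
    lbaf=${nvme2n3[lbaf$fmt]}                 # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<<"$lbaf")
    echo "in-use block size: $((1 << lbads)) bytes"   # 1 << 12 = 4096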
00:11:19.708 15:46:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:11:19.708 15:46:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:11:19.708 15:46:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:11:19.708 15:46:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:11:19.708 15:46:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:11:19.708 15:46:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:19.708 15:46:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:11:19.708 15:46:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:11:19.708 15:46:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:11:19.708 15:46:54 nvme_scc -- scripts/common.sh@15-24 -- # no pci filter set ([[ =~ 0000:00:13.0 ]] against an empty list) -> return 0
00:11:19.708 15:46:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:11:19.708 15:46:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:11:19.709 15:46:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:11:19.709 15:46:54 nvme_scc -- nvme/functions.sh@21-23 -- # fields read into nvme3[] (reg=val):
00:11:19.709 15:46:54 nvme_scc --   vid=0x1b36 ssvid=0x1af4 sn='12343' mn='QEMU NVMe Ctrl' fr='8.0.0' rab=6 ieee=525400
00:11:19.709 15:46:54 nvme_scc --   cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0
00:11:19.709 15:46:54 nvme_scc --   cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0
00:11:19.709 15:46:54 nvme_scc --   vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:11:19.710 15:46:54 nvme_scc --   wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0
00:11:19.710 15:46:54 nvme_scc --   dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0
00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 
00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.710 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@21 -- # 
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:11:19.711 15:46:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 ))
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]]
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:11:19.711 15:46:54 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1
00:11:19.712 15:46:54 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0, nvme3, nvme2 (same @182-@186 lookup for each: oncs=0x15d, bit 8 set -> echo nvme0, echo nvme3, echo nvme2)
00:11:19.712 15:46:54 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 ))
00:11:19.712 15:46:54 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1
00:11:19.712 15:46:54 nvme_scc -- nvme/functions.sh@207 -- # return 0
00:11:19.712 15:46:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:11:19.712 15:46:54 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:11:19.712 15:46:54 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:20.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:21.213 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:11:21.213 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:11:21.213 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:11:21.213 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:11:21.471 15:46:56 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:21.471 15:46:56 nvme_scc -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:11:21.471 15:46:56 nvme_scc -- common/autotest_common.sh@1103 -- # xtrace_disable
00:11:21.471 15:46:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:21.471 ************************************
00:11:21.471 START TEST nvme_simple_copy
00:11:21.471 ************************************
00:11:21.471 15:46:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:21.728 Initializing NVMe Controllers
00:11:21.728 Attaching to 0000:00:10.0
00:11:21.728 Controller supports SCC. Attached to 0000:00:10.0
00:11:21.728 Namespace ID: 1 size: 6GB
00:11:21.728 Initialization complete.
00:11:21.728
00:11:21.728 Controller QEMU NVMe Ctrl (12340 )
00:11:21.728 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:11:21.728 Namespace Block Size:4096
00:11:21.728 Writing LBAs 0 to 63 with Random Data
00:11:21.728 Copied LBAs from 0 - 63 to the Destination LBA 256
00:11:21.728 LBAs matching Written Data: 64
00:11:21.728
00:11:21.728 real 0m0.260s
00:11:21.728 user 0m0.088s
00:11:21.728 sys 0m0.071s
00:11:21.728 15:46:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:11:21.728 15:46:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:11:21.728 ************************************
00:11:21.728 END TEST nvme_simple_copy
00:11:21.728 ************************************
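The test wrote 64 LBAs of random data, issued one Simple Copy to destination LBA 256, and compared the copy against what it wrote. Roughly the same experiment can be reproduced by hand with nvme-cli once the device is back on the kernel nvme driver; flag spellings for the copy subcommand vary between nvme-cli releases and the device path is illustrative, so treat this as a sketch only:

  # Write LBAs 0-63 with random data, copy them to LBA 256, verify.
  dd if=/dev/urandom of=pattern.bin bs=4096 count=64
  nvme write /dev/nvme1n1 --start-block=0 --block-count=63 \
       --data-size=$((64 * 4096)) --data=pattern.bin
  nvme copy /dev/nvme1n1 --sdlba=256 --slbs=0 --blocks=63
  nvme read /dev/nvme1n1 --start-block=256 --block-count=63 \
       --data-size=$((64 * 4096)) --data=copy.bin
  cmp pattern.bin copy.bin && echo "LBAs matching Written Data: 64"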
00:11:21.728
00:11:21.728 real 0m8.793s
00:11:21.728 user 0m1.463s
00:11:21.728 sys 0m2.226s
00:11:21.728 15:46:56 nvme_scc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:11:21.728 15:46:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:21.728 ************************************
00:11:21.728 END TEST nvme_scc
00:11:21.728 ************************************
00:11:21.728 15:46:56 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]]
00:11:21.728 15:46:56 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]]
00:11:21.728 15:46:56 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]]
00:11:21.728 15:46:56 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]]
00:11:21.728 15:46:56 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:11:21.728 15:46:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:11:21.728 15:46:56 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:11:21.728 15:46:56 -- common/autotest_common.sh@10 -- # set +x
00:11:21.728 ************************************
00:11:21.728 START TEST nvme_fdp
00:11:21.728 ************************************
00:11:21.986 15:46:56 nvme_fdp -- common/autotest_common.sh@1121 -- # test/nvme/nvme_fdp.sh
00:11:21.986 * Looking for test storage...
00:11:21.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:21.987 15:46:56 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:21.987 15:46:56 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:21.987 15:46:56 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:11:21.987 15:46:56 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:11:21.987 15:46:56 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:21.987 15:46:56 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:21.987 15:46:56 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:21.987 15:46:56 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:21.987 15:46:56 nvme_fdp -- paths/export.sh@2 -- # PATH=... (export.sh@2-@4 each re-prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin; @5 exports and @6 echoes the resulting PATH; the repeated multi-hundred-character PATH strings are elided here)
00:11:21.987 15:46:56 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:11:21.987 15:46:56 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:11:21.987 15:46:56 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:11:21.987 15:46:56 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:11:21.987 15:46:56 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:11:21.987 15:46:56 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:11:21.987 15:46:56 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:11:21.987 15:46:56 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:11:21.987 15:46:56 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
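functions.sh tracks everything it scans in the associative arrays declared above (ctrls, nvmes, bdfs) plus the ordered_ctrls list, and it reads registers back through a bash nameref, as the earlier "local -n _ctrl=nvme1" trace showed. A self-contained sketch of that registry-plus-nameref pattern (array contents are illustrative values taken from this log):

  #!/usr/bin/env bash
  # Per-controller registers live in an array named after the controller;
  # a nameref lets one helper read any register of any controller.
  declare -A ctrls bdfs
  declare -A nvme1=([oncs]=0x15d [nn]=256)
  ctrls[nvme1]=nvme1
  bdfs[nvme1]=0000:00:10.0
  get_nvme_ctrl_feature() {
      local ctrl=$1 reg=${2:-oncs}
      local -n _ctrl=$ctrl                 # nameref into the nvme1 array
      [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
  }
  get_nvme_ctrl_feature nvme1 oncs         # prints 0x15d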
00:11:21.987 15:46:56 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:21.987 15:46:56 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:11:22.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:22.811 Waiting for block devices as requested
00:11:22.811 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:11:22.811 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:11:23.070 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:11:23.070 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:11:28.346 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
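setup.sh reset above hands the four QEMU controllers back from uio_pci_generic to the kernel nvme driver. The underlying mechanism is ordinary sysfs driver binding; setup.sh does considerably more bookkeeping, but the bare hand-off for one device looks like this (run as root; the BDF is taken from the log):

  #!/usr/bin/env bash
  # Rebind one PCI device from uio_pci_generic to the kernel nvme driver.
  bdf=0000:00:10.0
  echo "$bdf" > /sys/bus/pci/drivers/uio_pci_generic/unbind
  echo nvme > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe
  echo > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override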
00:11:28.346 15:47:02 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:11:28.346 15:47:02 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:11:28.346 15:47:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:28.346 15:47:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:11:28.346 15:47:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:11:28.346 15:47:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:11:28.346 15:47:02 nvme_fdp -- scripts/common.sh@15 -- # local i
00:11:28.346 15:47:02 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]]
00:11:28.346 15:47:02 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]]
00:11:28.346 15:47:02 nvme_fdp -- scripts/common.sh@24 -- # return 0
00:11:28.346 15:47:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:11:28.346 15:47:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:11:28.346 15:47:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:11:28.346 15:47:02 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:11:28.346 15:47:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:11:28.346 15:47:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:11:28.346 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0 id-ctrl fields parsed (per-field IFS=:/read/eval trace condensed, as for nvme3 above):
00:11:28.346 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0: vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0
00:11:28.347 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0: avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]]
00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:28.348 
15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:28.348 
15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:28.348 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:28.349 15:47:02 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
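The entries above are the inner loop of nvme_get (nvme/functions.sh@17-23) consuming `nvme id-ns /dev/nvme0n1`: each "field : value" line of nvme-cli output is split by `read` with IFS=: into a register name and its value, which is then eval'd into a global associative array named after the device. A minimal standalone sketch of that pattern follows; the helper name and the whitespace trimming are assumptions, not copied from the real script.

    #!/usr/bin/env bash
    # Sketch of the nvme_get parsing pattern traced above (details assumed).
    # Fills a global associative array ($1) from `nvme id-ctrl`/`id-ns` output.
    parse_id_output() {
        local ref=$1 subcmd=$2 dev=$3 reg val
        declare -gA "$ref=()"
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}            # "sqes      " -> "sqes"
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\${val# }"      # e.g. nvme0n1[nsze]=0x140000
        done < <(nvme "$subcmd" "$dev")
    }
    # parse_id_output nvme0n1 id-ns /dev/nvme0n1 && echo "${nvme0n1[nsze]}"

Because `read -r reg val` assigns everything after the first colon to val, multi-colon values such as the ps0 power-state string survive the split intact.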
00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:28.349 15:47:02 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:28.349 15:47:02 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:28.349 15:47:02 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:28.349 15:47:02 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:28.349 15:47:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 
15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.349 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
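A few entries back, the outer loop (nvme/functions.sh@47-52) picked up the second controller, /sys/class/nvme/nvme1: it resolved the controller's PCI address (0000:00:10.0 here) and proceeded only after scripts/common.sh's pci_can_use accepted it — in this run both filter lists are empty, so it returns 0. A sketch of that gate; the PCI_BLOCKED/PCI_ALLOWED variable names and the sysfs resolution are assumptions based on what the trace implies, not verified against the script.

    # Sketch (assumed env-var names): admit a PCI address unless blocked,
    # and require membership in the allowlist when one is set.
    pci_can_use() {
        local pci=$1 i
        for i in $PCI_BLOCKED; do
            [[ $pci == "$i" ]] && return 1
        done
        [[ -z $PCI_ALLOWED ]] && return 0      # empty allowlist admits all
        for i in $PCI_ALLOWED; do
            [[ $pci == "$i" ]] && return 0
        done
        return 1
    }

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                              # -> nvme1
    done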
00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:28.350 15:47:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
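With the registers cached in associative arrays, later stages of the nvme_fdp test can consult them without shelling out to nvme-cli again — for instance the ctratt word parsed above (0x8000 for nvme1). A small helper for that kind of bit probe follows; the bit index in the usage line is a hypothetical stand-in for the FDP-support bit (consult the NVMe base spec for the authoritative position), since the trace only records the raw value.

    # Sketch: probe one capability bit in a cached hex register value.
    reg_has_bit() {
        local -n _regs=$1     # nameref to a parsed array, e.g. nvme1
        local reg=$2 bit=$3
        (( (_regs[$reg] >> bit) & 1 ))
    }
    # Assumed usage (bit position hypothetical):
    #   reg_has_bit nvme1 ctratt 19 && echo "nvme1 advertises FDP"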
00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:28.350 15:47:02 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:28.350 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
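The registration step (nvme/functions.sh@53-63, traced earlier for nvme0 and repeated for nvme1 further below) globs the controller's nvmeXnY children, gives each namespace its own associative array via a nameref, and files the controller in the global ctrls/nvmes/bdfs/ordered_ctrls maps. A reduced sketch of that bookkeeping, assuming the global maps are declared elsewhere in functions.sh:

    # Sketch: register one controller and its namespaces in the global maps
    # seen in the trace (declarations assumed).
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    register_ctrl() {
        local ctrl=$1 pci=$2 ns
        local ctrl_dev=${ctrl##*/}            # nvme1
        declare -gA "${ctrl_dev}_ns=()"
        local -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do   # /sys/class/nvme/nvme1/nvme1n1 ...
            [[ -e $ns ]] || continue
            _ctrl_ns[${ns##*n}]=${ns##*/}     # key "1" -> nvme1n1
        done
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    }
    # register_ctrl /sys/class/nvme/nvme1 0000:00:10.0

Indexing ordered_ctrls by the numeric suffix keeps the controllers in a stable, device-number order regardless of glob ordering.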
00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
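(Editor's note on the trace above: this is the shell xtrace of nvme/functions.sh's nvme_get loop filling the global associative array nvme1n1 from the text output of "nvme id-ns /dev/nvme1n1". Each "reg : val" line is split with IFS=:, the value is tested with [[ -n ... ]], and non-empty pairs are eval'd into the array. A minimal sketch of that pattern, assuming nvme-cli's one-pair-per-line output; parse_id_output and the idns array name are illustrative, not part of functions.sh:

#!/usr/bin/env bash
# Sketch of the loop traced above: read "reg : val" pairs from an
# nvme-cli identify command and eval them into a global associative
# array, as nvme/functions.sh does with nvme1, nvme1n1, etc.
parse_id_output() {
  local ref=$1 reg val
  shift
  local -gA "$ref=()"            # global associative array, as in the trace
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}     # strip padding around the register name
    val=${val# }                 # drop the single space after the colon
    [[ -n $reg && -n $val ]] || continue   # the "[[ -n ... ]]" test above
    eval "${ref}[${reg}]=\"${val}\""       # assumes val has no embedded quotes
  done < <("$@")
}

# Usage sketch (assumes nvme-cli is installed and the device exists):
#   parse_id_output idns nvme id-ns /dev/nvme1n1
#   echo "${idns[nsze]}"    # -> 0x17a17a for the namespace traced above
)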
00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.351 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 
15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:02 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
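(Editor's note: the lbaf0..lbaf7 entries being captured here are the namespace's LBA formats. lbads is a log2 data-block size, so "lbads:9" means 512-byte and "lbads:12" means 4096-byte blocks, and the low nibble of flbas — 0x7 for this namespace — selects the format in use, which is why lbaf7 is tagged "(in use)" just below. A hedged helper, not part of functions.sh, that decodes this from the array the trace just filled:

# Illustrative only: report the in-use LBA format for an array
# populated as above, e.g. "in_use_block_size nvme1n1".
in_use_block_size() {
  local -n ns=$1                       # nameref to nvme1n1, nvme2n1, ...
  local idx=$(( ${ns[flbas]} & 0xf ))  # flbas bits 0-3 pick the format
  local lbaf=${ns[lbaf$idx]}           # e.g. "ms:64 lbads:12 rp:0 (in use)"
  local lbads=${lbaf#*lbads:}; lbads=${lbads%% *}
  local ms=${lbaf#ms:};        ms=${ms%% *}
  echo "lbaf$idx: $((1 << lbads))-byte blocks, ${ms}-byte metadata"
}
# For nvme1n1 above this prints: lbaf7: 4096-byte blocks, 64-byte metadata
)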
00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:28.352 15:47:03 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:28.352 15:47:03 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:28.352 15:47:03 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:28.352 15:47:03 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:28.352 15:47:03 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.352 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:28.353 15:47:03 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.353 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:28.354 15:47:03 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 
15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:28.354 15:47:03 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:28.354 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:28.355 15:47:03 nvme_fdp -- 
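The eight lbaf entries just captured are the namespace's supported LBA formats, and flbas=0x4 says format 4 is the active one, which the trace confirms with the "(in use)" marker on lbaf4 (lbads:12 and ms:0, i.e. 4096-byte blocks with no metadata). Decoding that from the parsed array looks like the sketch below, seeded with the values from the log:

    declare -A nvme2n1=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt=$(( nvme2n1[flbas] & 0xf ))               # low 4 bits select the format -> 4
    lbaf=${nvme2n1[lbaf$fmt]}
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}    # -> 12
    echo "block size: $(( 1 << lbads )) bytes"    # -> 4096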
nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.355 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp 
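All three QEMU namespaces report nsze = ncap = nuse = 0x100000. Those counts are in logical blocks, so with the 4096-byte format selected above each namespace works out to 4 GiB; a one-liner to check the arithmetic:

    nsze=0x100000; block=4096
    echo "$(( nsze * block )) bytes ($(( nsze * block >> 30 )) GiB)"   # 4294967296 bytes (4 GiB)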
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:28.356 15:47:03 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.356 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:28.357 15:47:03 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:28.357 15:47:03 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:28.357 15:47:03 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:28.357 15:47:03 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.357 15:47:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:28.617 15:47:03 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.617 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
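Note: the long run of IFS=:/read/eval records above and below is a single loop in nvme/functions.sh's nvme_get, which folds each "reg : val" line of nvme id-ctrl output into a global associative array named after the controller. A minimal sketch of the pattern, continuing through the records below (the helper name is illustrative and the quoting is simplified relative to the real script):

    # Fold "reg : val" lines from nvme-cli into a global associative array,
    # mirroring the nvme_get loop whose xtrace appears above and below.
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                       # e.g. declare -gA nvme3=()
        while IFS=: read -r reg val; do
            reg=${reg//[^a-zA-Z0-9]/}             # normalize register name
            [[ -n $reg && -n $val ]] || continue  # skip blank/separator lines
            eval "${ref}[${reg}]=\"\${val# }\""   # nvme3[vid]=0x1b36, ...
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
    }
    # usage: nvme_get_sketch nvme3 /dev/nvme3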
00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:28.618 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 
15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.619 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:28.620 15:47:03 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:28.620 15:47:03 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:11:28.620 15:47:03 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:11:28.620 15:47:03 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:28.620 15:47:03 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:28.620 15:47:03 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:29.187 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:29.795 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:29.795 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:30.053 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:30.054 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:30.054 15:47:04 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:30.054 15:47:04 nvme_fdp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:30.054 15:47:04 nvme_fdp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:30.054 15:47:04 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:30.054 ************************************ 00:11:30.054 START TEST nvme_flexible_data_placement 00:11:30.054 ************************************ 00:11:30.054 15:47:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:30.312 Initializing NVMe Controllers 00:11:30.312 Attaching to 0000:00:13.0 00:11:30.312 Controller supports FDP Attached to 0000:00:13.0 00:11:30.312 Namespace ID: 1 Endurance Group ID: 1 
00:11:30.312 Initialization complete. 00:11:30.312 00:11:30.312 ================================== 00:11:30.312 == FDP tests for Namespace: #01 == 00:11:30.312 ================================== 00:11:30.312 00:11:30.312 Get Feature: FDP: 00:11:30.312 ================= 00:11:30.312 Enabled: Yes 00:11:30.312 FDP configuration Index: 0 00:11:30.312 00:11:30.312 FDP configurations log page 00:11:30.312 =========================== 00:11:30.312 Number of FDP configurations: 1 00:11:30.312 Version: 0 00:11:30.312 Size: 112 00:11:30.312 FDP Configuration Descriptor: 0 00:11:30.312 Descriptor Size: 96 00:11:30.312 Reclaim Group Identifier format: 2 00:11:30.312 FDP Volatile Write Cache: Not Present 00:11:30.312 FDP Configuration: Valid 00:11:30.312 Vendor Specific Size: 0 00:11:30.312 Number of Reclaim Groups: 2 00:11:30.312 Number of Reclaim Unit Handles: 8 00:11:30.312 Max Placement Identifiers: 128 00:11:30.312 Number of Namespaces Supported: 256 00:11:30.312 Reclaim Unit Nominal Size: 6000000 bytes 00:11:30.312 Estimated Reclaim Unit Time Limit: Not Reported 00:11:30.312 RUH Desc #000: RUH Type: Initially Isolated 00:11:30.312 RUH Desc #001: RUH Type: Initially Isolated 00:11:30.312 RUH Desc #002: RUH Type: Initially Isolated 00:11:30.312 RUH Desc #003: RUH Type: Initially Isolated 00:11:30.312 RUH Desc #004: RUH Type: Initially Isolated 00:11:30.312 RUH Desc #005: RUH Type: Initially Isolated 00:11:30.312 RUH Desc #006: RUH Type: Initially Isolated 00:11:30.312 RUH Desc #007: RUH Type: Initially Isolated 00:11:30.312 00:11:30.312 FDP reclaim unit handle usage log page 00:11:30.312 ====================================== 00:11:30.312 Number of Reclaim Unit Handles: 8 00:11:30.312 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:30.312 RUH Usage Desc #001: RUH Attributes: Unused 00:11:30.312 RUH Usage Desc #002: RUH Attributes: Unused 00:11:30.312 RUH Usage Desc #003: RUH Attributes: Unused 00:11:30.312 RUH Usage Desc #004: RUH Attributes: Unused 00:11:30.312 RUH Usage Desc #005: RUH Attributes: Unused 00:11:30.312 RUH Usage Desc #006: RUH Attributes: Unused 00:11:30.312 RUH Usage Desc #007: RUH Attributes: Unused 00:11:30.312 00:11:30.312 FDP statistics log page 00:11:30.312 ======================= 00:11:30.312 Host bytes with metadata written: 1602707456 00:11:30.312 Media bytes with metadata written: 1602924544 00:11:30.312 Media bytes erased: 0 00:11:30.312 00:11:30.312 FDP Reclaim unit handle status 00:11:30.312 ============================== 00:11:30.312 Number of RUHS descriptors: 2 00:11:30.312 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000078a 00:11:30.312 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:30.312 00:11:30.312 FDP write on placement id: 0 success 00:11:30.312 00:11:30.312 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:30.312 00:11:30.312 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:30.312 00:11:30.312 Get Feature: FDP Events for Placement handle: #0 00:11:30.312 ======================== 00:11:30.312 Number of FDP Events: 6 00:11:30.312 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:30.312 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:30.312 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:30.312 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:30.312 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:30.312 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
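Note: the controller under test here was picked a few records earlier by ctrl_has_fdp, which tests CTRATT bit 19 (FDP support) in the values captured by the id-ctrl parsing loop. A condensed sketch of that check (function name is illustrative):

    # FDP capability test mirrored from the (( ctratt & 1 << 19 )) records
    # earlier in the log: only a controller with CTRATT bit 19 set has FDP.
    ctrl_has_fdp_sketch() {
        local -n _ctrl=$1                # nameref to nvme0/nvme1/nvme2/nvme3
        local ctratt=${_ctrl[ctratt]:-0}
        (( ctratt & 1 << 19 ))           # 0x8000 fails, 0x88010 passes
    }

In this run nvme0, nvme1, and nvme2 all report ctratt=0x8000 (bit 19 clear), while nvme3 reports 0x88010, so nvme3 is the controller echoed back and handed to the fdp test binary at 0000:00:13.0.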
00:11:30.312 00:11:30.312 FDP events log page 00:11:30.312 =================== 00:11:30.312 Number of FDP events: 1 00:11:30.312 FDP Event #0: 00:11:30.312 Event Type: RU Not Written to Capacity 00:11:30.312 Placement Identifier: Valid 00:11:30.312 NSID: Valid 00:11:30.312 Location: Valid 00:11:30.312 Placement Identifier: 0 00:11:30.312 Event Timestamp: 4 00:11:30.312 Namespace Identifier: 1 00:11:30.312 Reclaim Group Identifier: 0 00:11:30.312 Reclaim Unit Handle Identifier: 0 00:11:30.312 00:11:30.312 FDP test passed 00:11:30.312 00:11:30.312 real 0m0.238s 00:11:30.312 user 0m0.062s 00:11:30.312 sys 0m0.076s 00:11:30.312 15:47:05 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:30.312 15:47:05 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:30.312 ************************************ 00:11:30.312 END TEST nvme_flexible_data_placement 00:11:30.312 ************************************ 00:11:30.571 00:11:30.571 real 0m8.704s 00:11:30.571 user 0m1.382s 00:11:30.571 sys 0m2.395s 00:11:30.571 15:47:05 nvme_fdp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:30.571 15:47:05 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:30.571 ************************************ 00:11:30.571 END TEST nvme_fdp 00:11:30.571 ************************************ 00:11:30.571 15:47:05 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:11:30.571 15:47:05 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:30.571 15:47:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:30.571 15:47:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:30.571 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:11:30.571 ************************************ 00:11:30.571 START TEST nvme_rpc 00:11:30.571 ************************************ 00:11:30.571 15:47:05 nvme_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:30.571 * Looking for test storage... 
00:11:30.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:30.571 15:47:05 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:30.571 15:47:05 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:30.571 15:47:05 nvme_rpc -- common/autotest_common.sh@1520 -- # bdfs=() 00:11:30.571 15:47:05 nvme_rpc -- common/autotest_common.sh@1520 -- # local bdfs 00:11:30.571 15:47:05 nvme_rpc -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:11:30.571 15:47:05 nvme_rpc -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:11:30.571 15:47:05 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:30.571 15:47:05 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:11:30.571 15:47:05 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:30.571 15:47:05 nvme_rpc -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:30.571 15:47:05 nvme_rpc -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:30.830 15:47:05 nvme_rpc -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:11:30.830 15:47:05 nvme_rpc -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:30.830 15:47:05 nvme_rpc -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:11:30.830 15:47:05 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:30.830 15:47:05 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=82946 00:11:30.830 15:47:05 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:30.830 15:47:05 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:30.830 15:47:05 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 82946 00:11:30.830 15:47:05 nvme_rpc -- common/autotest_common.sh@827 -- # '[' -z 82946 ']' 00:11:30.830 15:47:05 nvme_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.830 15:47:05 nvme_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:30.830 15:47:05 nvme_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.830 15:47:05 nvme_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:30.830 15:47:05 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.830 [2024-07-20 15:47:05.518908] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
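Note: get_first_nvme_bdf, whose xtrace appears just above, reduces the generated SPDK config to a list of PCI addresses and returns the first. A rough equivalent of the helper chain, assuming the same spdk_repo layout and $rootdir variable shown in the log:

    # Enumerate NVMe transport addresses from gen_nvme.sh output and take
    # the first, as the bdfs=(...) records above do.
    get_first_nvme_bdf_sketch() {
        local -a bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1   # fail if no NVMe devices present
        echo "${bdfs[0]}"                   # 0000:00:10.0 in this run
    }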
00:11:30.830 [2024-07-20 15:47:05.519042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82946 ] 00:11:31.088 [2024-07-20 15:47:05.670935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:31.088 [2024-07-20 15:47:05.714655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.088 [2024-07-20 15:47:05.714750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.660 15:47:06 nvme_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:31.660 15:47:06 nvme_rpc -- common/autotest_common.sh@860 -- # return 0 00:11:31.660 15:47:06 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:31.917 Nvme0n1 00:11:31.917 15:47:06 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:31.917 15:47:06 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:32.175 request: 00:11:32.175 { 00:11:32.175 "filename": "non_existing_file", 00:11:32.175 "bdev_name": "Nvme0n1", 00:11:32.175 "method": "bdev_nvme_apply_firmware", 00:11:32.175 "req_id": 1 00:11:32.175 } 00:11:32.175 Got JSON-RPC error response 00:11:32.175 response: 00:11:32.175 { 00:11:32.175 "code": -32603, 00:11:32.175 "message": "open file failed." 00:11:32.175 } 00:11:32.175 15:47:06 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:32.175 15:47:06 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:32.175 15:47:06 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:32.175 15:47:06 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:32.175 15:47:06 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 82946 00:11:32.175 15:47:06 nvme_rpc -- common/autotest_common.sh@946 -- # '[' -z 82946 ']' 00:11:32.175 15:47:06 nvme_rpc -- common/autotest_common.sh@950 -- # kill -0 82946 00:11:32.175 15:47:06 nvme_rpc -- common/autotest_common.sh@951 -- # uname 00:11:32.175 15:47:06 nvme_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:32.175 15:47:06 nvme_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82946 00:11:32.175 15:47:06 nvme_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:32.175 killing process with pid 82946 00:11:32.175 15:47:06 nvme_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:32.175 15:47:06 nvme_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82946' 00:11:32.175 15:47:06 nvme_rpc -- common/autotest_common.sh@965 -- # kill 82946 00:11:32.175 15:47:06 nvme_rpc -- common/autotest_common.sh@970 -- # wait 82946 00:11:32.740 00:11:32.740 real 0m2.150s 00:11:32.740 user 0m3.850s 00:11:32.740 sys 0m0.672s 00:11:32.740 ************************************ 00:11:32.740 END TEST nvme_rpc 00:11:32.740 ************************************ 00:11:32.740 15:47:07 nvme_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:32.740 15:47:07 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.740 15:47:07 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:32.740 15:47:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:32.740 
15:47:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:32.740 15:47:07 -- common/autotest_common.sh@10 -- # set +x 00:11:32.740 ************************************ 00:11:32.740 START TEST nvme_rpc_timeouts 00:11:32.740 ************************************ 00:11:32.740 15:47:07 nvme_rpc_timeouts -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:32.998 * Looking for test storage... 00:11:32.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:32.998 15:47:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.998 15:47:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_83000 00:11:32.998 15:47:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_83000 00:11:32.998 15:47:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=83024 00:11:32.998 15:47:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:32.998 15:47:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:32.998 15:47:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 83024 00:11:32.998 15:47:07 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # '[' -z 83024 ']' 00:11:32.998 15:47:07 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.998 15:47:07 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:32.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.998 15:47:07 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.998 15:47:07 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:32.998 15:47:07 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:32.998 [2024-07-20 15:47:07.643677] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:11:32.998 [2024-07-20 15:47:07.643840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83024 ] 00:11:33.255 [2024-07-20 15:47:07.793002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:33.255 [2024-07-20 15:47:07.835530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.255 [2024-07-20 15:47:07.835604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.817 15:47:08 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:33.818 15:47:08 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # return 0 00:11:33.818 Checking default timeout settings: 00:11:33.818 15:47:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:33.818 15:47:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:34.075 Making settings changes with rpc: 00:11:34.075 15:47:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:34.075 15:47:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:34.332 Check default vs. modified settings: 00:11:34.332 15:47:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:34.332 15:47:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_83000 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_83000 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:34.604 Setting action_on_timeout is changed as expected. 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_83000 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_83000 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:34.604 Setting timeout_us is changed as expected. 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_83000 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_83000 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:34.604 Setting timeout_admin_us is changed as expected. 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
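Note: each "changed as expected" record above comes from the same three-stage pipeline: grep the setting out of a save_config dump, take the second field, and strip punctuation before comparing. A sketch using this run's temp files (helper name is illustrative):

    # Compare one timeout setting between the default and modified
    # save_config dumps, as the grep/awk/sed records above do.
    check_setting_sketch() {
        local setting=$1 before after
        before=$(grep "$setting" /tmp/settings_default_83000 |
                     awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_83000 |
                    awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    }
    # e.g. check_setting_sketch timeout_us -> before=0, after=12000000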
00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_83000 /tmp/settings_modified_83000 00:11:34.604 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 83024 00:11:34.604 15:47:09 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # '[' -z 83024 ']' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # kill -0 83024 00:11:34.604 15:47:09 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # uname 00:11:34.604 15:47:09 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83024 00:11:34.604 15:47:09 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:34.604 killing process with pid 83024 00:11:34.604 15:47:09 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83024' 00:11:34.604 15:47:09 nvme_rpc_timeouts -- common/autotest_common.sh@965 -- # kill 83024 00:11:34.604 15:47:09 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # wait 83024 00:11:35.170 RPC TIMEOUT SETTING TEST PASSED. 00:11:35.170 15:47:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:35.170 00:11:35.170 real 0m2.300s 00:11:35.170 user 0m4.356s 00:11:35.170 sys 0m0.693s 00:11:35.170 15:47:09 nvme_rpc_timeouts -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:35.170 15:47:09 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:35.170 ************************************ 00:11:35.170 END TEST nvme_rpc_timeouts 00:11:35.170 ************************************ 00:11:35.170 15:47:09 -- spdk/autotest.sh@243 -- # uname -s 00:11:35.170 15:47:09 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:11:35.170 15:47:09 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:35.170 15:47:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:35.170 15:47:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:35.170 15:47:09 -- common/autotest_common.sh@10 -- # set +x 00:11:35.170 ************************************ 00:11:35.170 START TEST sw_hotplug 00:11:35.170 ************************************ 00:11:35.170 15:47:09 sw_hotplug -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:35.170 * Looking for test storage... 
00:11:35.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:35.170 15:47:09 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:35.737 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:35.995 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:35.995 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:35.995 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:35.995 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:35.995 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # hotplug_wait=6 00:11:35.995 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # hotplug_events=3 00:11:35.995 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvmes=($(nvme_in_userspace)) 00:11:35.995 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvme_in_userspace 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@230 -- # local class 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:11:35.995 15:47:10 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:35.995 15:47:10 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:11:35.996 15:47:10 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:35.996 15:47:10 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:35.996 15:47:10 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:35.996 15:47:10 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:35.996 15:47:10 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:11:35.996 15:47:10 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:35.996 15:47:10 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:35.996 15:47:10 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:35.996 15:47:10 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:35.996 15:47:10 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:35.996 15:47:10 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:35.996 15:47:10 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:11:36.254 15:47:10 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:36.254 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@127 -- # nvme_count=2 00:11:36.254 15:47:10 sw_hotplug -- 
nvme/sw_hotplug.sh@128 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:36.254 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@130 -- # xtrace_disable 00:11:36.254 15:47:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.254 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # run_hotplug 00:11:36.254 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@65 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:36.254 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@73 -- # hotplug_pid=83368 00:11:36.254 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@75 -- # debug_remove_attach_helper 3 6 false 00:11:36.254 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:36.254 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:11:36.254 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 false 00:11:36.254 15:47:10 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:11:36.254 15:47:10 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:11:36.254 15:47:10 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:11:36.254 15:47:10 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 false 00:11:36.254 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:11:36.254 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:11:36.254 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=false 00:11:36.254 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:11:36.254 15:47:10 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:11:36.512 Initializing NVMe Controllers 00:11:36.512 Attaching to 0000:00:10.0 00:11:36.512 Attaching to 0000:00:11.0 00:11:36.512 Attaching to 0000:00:12.0 00:11:36.512 Attaching to 0000:00:13.0 00:11:36.512 Attached to 0000:00:10.0 00:11:36.512 Attached to 0000:00:11.0 00:11:36.512 Attached to 0000:00:13.0 00:11:36.512 Attached to 0000:00:12.0 00:11:36.512 Initialization complete. Starting I/O... 
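The nvme_in_userspace trace above (scripts/common.sh@230-@242) locates NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). Here is the traced pipeline as a standalone sketch; note that cc is deliberately assigned with the double quotes included, because lspci -mm quotes its fields and the trailing tr strips those quotes only after awk has already matched:

    # Print the BDF of every NVMe controller (class/subclass/prog-if = 01/08/02).
    lspci -mm -n -D \
        | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'

Each BDF this prints (0000:00:10.0 through 0000:00:13.0 here) is then run through pci_can_use, an allow/block filter that is empty in this run, and kept only if /sys/bus/pci/drivers/nvme/<bdf> exists, which is the @318-@323 loop in the trace above.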
00:11:36.512 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:36.512 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:36.512 QEMU NVMe Ctrl (12343 ): 0 I/Os completed (+0) 00:11:36.512 QEMU NVMe Ctrl (12342 ): 0 I/Os completed (+0) 00:11:36.512 00:11:37.444 QEMU NVMe Ctrl (12340 ): 1472 I/Os completed (+1472) 00:11:37.444 QEMU NVMe Ctrl (12341 ): 1472 I/Os completed (+1472) 00:11:37.444 QEMU NVMe Ctrl (12343 ): 1476 I/Os completed (+1476) 00:11:37.444 QEMU NVMe Ctrl (12342 ): 1475 I/Os completed (+1475) 00:11:37.444 00:11:38.393 QEMU NVMe Ctrl (12340 ): 3236 I/Os completed (+1764) 00:11:38.393 QEMU NVMe Ctrl (12341 ): 3236 I/Os completed (+1764) 00:11:38.393 QEMU NVMe Ctrl (12343 ): 3240 I/Os completed (+1764) 00:11:38.393 QEMU NVMe Ctrl (12342 ): 3239 I/Os completed (+1764) 00:11:38.393 00:11:39.771 QEMU NVMe Ctrl (12340 ): 5220 I/Os completed (+1984) 00:11:39.771 QEMU NVMe Ctrl (12341 ): 5220 I/Os completed (+1984) 00:11:39.771 QEMU NVMe Ctrl (12343 ): 5227 I/Os completed (+1987) 00:11:39.771 QEMU NVMe Ctrl (12342 ): 5226 I/Os completed (+1987) 00:11:39.771 00:11:40.705 QEMU NVMe Ctrl (12340 ): 7180 I/Os completed (+1960) 00:11:40.705 QEMU NVMe Ctrl (12341 ): 7183 I/Os completed (+1963) 00:11:40.705 QEMU NVMe Ctrl (12343 ): 7192 I/Os completed (+1965) 00:11:40.705 QEMU NVMe Ctrl (12342 ): 7186 I/Os completed (+1960) 00:11:40.705 00:11:41.640 QEMU NVMe Ctrl (12340 ): 9176 I/Os completed (+1996) 00:11:41.640 QEMU NVMe Ctrl (12341 ): 9179 I/Os completed (+1996) 00:11:41.640 QEMU NVMe Ctrl (12343 ): 9188 I/Os completed (+1996) 00:11:41.640 QEMU NVMe Ctrl (12342 ): 9187 I/Os completed (+2001) 00:11:41.640 00:11:42.207 15:47:16 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:11:42.207 15:47:16 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:42.207 15:47:16 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:42.207 [2024-07-20 15:47:16.974187] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:42.207 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:42.207 [2024-07-20 15:47:16.975687] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.207 [2024-07-20 15:47:16.975740] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.207 [2024-07-20 15:47:16.975760] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.207 [2024-07-20 15:47:16.975778] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.207 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:42.207 [2024-07-20 15:47:16.977951] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.207 [2024-07-20 15:47:16.977987] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.207 [2024-07-20 15:47:16.978003] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.207 [2024-07-20 15:47:16.978022] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.465 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:42.465 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:42.465 [2024-07-20 15:47:17.014761] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:42.465 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:42.465 [2024-07-20 15:47:17.016121] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.465 [2024-07-20 15:47:17.016158] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.465 [2024-07-20 15:47:17.016177] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.465 [2024-07-20 15:47:17.016193] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.465 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:42.465 [2024-07-20 15:47:17.018107] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.465 [2024-07-20 15:47:17.018135] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.465 [2024-07-20 15:47:17.018167] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.465 [2024-07-20 15:47:17.018183] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.465 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:42.465 EAL: Scan for (pci) bus failed. 00:11:42.465 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:11:42.465 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:11:42.465 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:42.465 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:42.465 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:11:42.465 QEMU NVMe Ctrl (12343 ): 11216 I/Os completed (+2028) 00:11:42.465 QEMU NVMe Ctrl (12342 ): 11218 I/Os completed (+2031) 00:11:42.465 00:11:42.465 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:11:42.465 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:42.465 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:42.465 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:42.465 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:11:42.465 Attaching to 0000:00:10.0 00:11:42.465 Attached to 0000:00:10.0 00:11:42.723 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:11:42.723 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:42.723 15:47:17 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:11:42.723 Attaching to 0000:00:11.0 00:11:42.723 Attached to 0000:00:11.0 00:11:43.657 QEMU NVMe Ctrl (12343 ): 13264 I/Os completed (+2048) 00:11:43.657 QEMU NVMe Ctrl (12342 ): 13266 I/Os completed (+2048) 00:11:43.657 QEMU NVMe Ctrl (12340 ): 1813 I/Os completed (+1813) 00:11:43.657 QEMU NVMe Ctrl (12341 ): 1621 I/Os completed (+1621) 00:11:43.657 00:11:44.593 QEMU NVMe Ctrl (12343 ): 15244 I/Os completed (+1980) 00:11:44.593 QEMU NVMe Ctrl (12342 ): 15246 I/Os completed (+1980) 00:11:44.593 QEMU NVMe Ctrl (12340 ): 3793 I/Os completed (+1980) 00:11:44.593 QEMU NVMe Ctrl (12341 ): 3603 I/Os completed (+1982) 00:11:44.593 00:11:45.529 QEMU NVMe Ctrl (12343 ): 17224 I/Os completed (+1980) 00:11:45.529 QEMU NVMe Ctrl (12342 ): 17226 I/Os completed (+1980) 00:11:45.529 QEMU NVMe Ctrl (12340 ): 5773 I/Os completed (+1980) 00:11:45.529 QEMU NVMe Ctrl (12341 ): 5583 I/Os completed (+1980) 00:11:45.529 00:11:46.466 QEMU NVMe Ctrl (12343 ): 19216 
I/Os completed (+1992) 00:11:46.466 QEMU NVMe Ctrl (12342 ): 19218 I/Os completed (+1992) 00:11:46.466 QEMU NVMe Ctrl (12340 ): 7765 I/Os completed (+1992) 00:11:46.466 QEMU NVMe Ctrl (12341 ): 7577 I/Os completed (+1994) 00:11:46.466 00:11:47.404 QEMU NVMe Ctrl (12343 ): 21196 I/Os completed (+1980) 00:11:47.404 QEMU NVMe Ctrl (12342 ): 21198 I/Os completed (+1980) 00:11:47.404 QEMU NVMe Ctrl (12340 ): 9745 I/Os completed (+1980) 00:11:47.404 QEMU NVMe Ctrl (12341 ): 9557 I/Os completed (+1980) 00:11:47.404 00:11:48.794 QEMU NVMe Ctrl (12343 ): 23184 I/Os completed (+1988) 00:11:48.794 QEMU NVMe Ctrl (12342 ): 23186 I/Os completed (+1988) 00:11:48.794 QEMU NVMe Ctrl (12340 ): 11733 I/Os completed (+1988) 00:11:48.794 QEMU NVMe Ctrl (12341 ): 11549 I/Os completed (+1992) 00:11:48.794 00:11:49.359 QEMU NVMe Ctrl (12343 ): 25160 I/Os completed (+1976) 00:11:49.359 QEMU NVMe Ctrl (12342 ): 25162 I/Os completed (+1976) 00:11:49.359 QEMU NVMe Ctrl (12340 ): 13709 I/Os completed (+1976) 00:11:49.359 QEMU NVMe Ctrl (12341 ): 13527 I/Os completed (+1978) 00:11:49.359 00:11:50.395 QEMU NVMe Ctrl (12343 ): 27128 I/Os completed (+1968) 00:11:50.395 QEMU NVMe Ctrl (12342 ): 27130 I/Os completed (+1968) 00:11:50.395 QEMU NVMe Ctrl (12340 ): 15679 I/Os completed (+1970) 00:11:50.395 QEMU NVMe Ctrl (12341 ): 15497 I/Os completed (+1970) 00:11:50.395 00:11:51.773 QEMU NVMe Ctrl (12343 ): 29060 I/Os completed (+1932) 00:11:51.773 QEMU NVMe Ctrl (12342 ): 29066 I/Os completed (+1936) 00:11:51.773 QEMU NVMe Ctrl (12340 ): 17613 I/Os completed (+1934) 00:11:51.773 QEMU NVMe Ctrl (12341 ): 17433 I/Os completed (+1936) 00:11:51.773 00:11:52.710 QEMU NVMe Ctrl (12343 ): 31028 I/Os completed (+1968) 00:11:52.710 QEMU NVMe Ctrl (12342 ): 31034 I/Os completed (+1968) 00:11:52.710 QEMU NVMe Ctrl (12340 ): 19581 I/Os completed (+1968) 00:11:52.710 QEMU NVMe Ctrl (12341 ): 19401 I/Os completed (+1968) 00:11:52.710 00:11:53.652 QEMU NVMe Ctrl (12343 ): 32948 I/Os completed (+1920) 00:11:53.652 QEMU NVMe Ctrl (12342 ): 32956 I/Os completed (+1922) 00:11:53.652 QEMU NVMe Ctrl (12340 ): 21508 I/Os completed (+1927) 00:11:53.652 QEMU NVMe Ctrl (12341 ): 21327 I/Os completed (+1926) 00:11:53.652 00:11:54.587 QEMU NVMe Ctrl (12343 ): 34904 I/Os completed (+1956) 00:11:54.587 QEMU NVMe Ctrl (12342 ): 34912 I/Os completed (+1956) 00:11:54.587 QEMU NVMe Ctrl (12340 ): 23470 I/Os completed (+1962) 00:11:54.587 QEMU NVMe Ctrl (12341 ): 23291 I/Os completed (+1964) 00:11:54.587 00:11:54.587 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:11:54.587 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:11:54.587 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:54.587 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:54.587 [2024-07-20 15:47:29.341693] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:54.587 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:54.587 [2024-07-20 15:47:29.343413] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.587 [2024-07-20 15:47:29.343466] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.587 [2024-07-20 15:47:29.343485] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.587 [2024-07-20 15:47:29.343508] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.587 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:54.587 [2024-07-20 15:47:29.345240] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.587 [2024-07-20 15:47:29.345279] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.587 [2024-07-20 15:47:29.345296] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.587 [2024-07-20 15:47:29.345314] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.587 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:54.587 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:54.846 [2024-07-20 15:47:29.382871] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:54.846 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:54.846 [2024-07-20 15:47:29.384684] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.846 [2024-07-20 15:47:29.384725] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.846 [2024-07-20 15:47:29.384745] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.846 [2024-07-20 15:47:29.384761] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.846 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:54.846 [2024-07-20 15:47:29.386299] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.846 [2024-07-20 15:47:29.386328] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.846 [2024-07-20 15:47:29.386349] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.846 [2024-07-20 15:47:29.386376] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.846 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:11:54.846 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:11:54.846 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:54.846 EAL: Scan for (pci) bus failed. 
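The bare `echo 1` at sw_hotplug.sh@35 in the traces above is a redirected write that hot-removes each controller, and the `echo 1` at @44 brings the bus back; xtrace does not show redirection targets, so the sysfs paths below are an assumption based on the standard Linux PCI hotplug interface:

    # Assumed targets for the traced "echo 1" writes.
    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # @35: device vanishes, driver logs 'in failed state'
    sleep 6                                       # hotplug_wait=6 from the trace
    echo 1 > /sys/bus/pci/rescan                  # @44: re-enumerate, devices reappear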
00:11:54.846 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:54.846 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:54.846 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:11:54.846 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:11:54.847 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:54.847 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:54.847 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:54.847 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:11:54.847 Attaching to 0000:00:10.0 00:11:54.847 Attached to 0000:00:10.0 00:11:55.119 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:11:55.119 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:55.119 15:47:29 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:11:55.119 Attaching to 0000:00:11.0 00:11:55.119 Attached to 0000:00:11.0 00:11:55.378 QEMU NVMe Ctrl (12343 ): 36980 I/Os completed (+2076) 00:11:55.378 QEMU NVMe Ctrl (12342 ): 36991 I/Os completed (+2079) 00:11:55.378 QEMU NVMe Ctrl (12340 ): 1024 I/Os completed (+1024) 00:11:55.378 QEMU NVMe Ctrl (12341 ): 796 I/Os completed (+796) 00:11:55.378 00:11:56.755 QEMU NVMe Ctrl (12343 ): 38952 I/Os completed (+1972) 00:11:56.755 QEMU NVMe Ctrl (12342 ): 38963 I/Os completed (+1972) 00:11:56.755 QEMU NVMe Ctrl (12340 ): 2998 I/Os completed (+1974) 00:11:56.755 QEMU NVMe Ctrl (12341 ): 2772 I/Os completed (+1976) 00:11:56.755 00:11:57.691 QEMU NVMe Ctrl (12343 ): 40920 I/Os completed (+1968) 00:11:57.691 QEMU NVMe Ctrl (12342 ): 40931 I/Os completed (+1968) 00:11:57.691 QEMU NVMe Ctrl (12340 ): 4969 I/Os completed (+1971) 00:11:57.691 QEMU NVMe Ctrl (12341 ): 4741 I/Os completed (+1969) 00:11:57.691 00:11:58.626 QEMU NVMe Ctrl (12343 ): 42872 I/Os completed (+1952) 00:11:58.626 QEMU NVMe Ctrl (12342 ): 42883 I/Os completed (+1952) 00:11:58.626 QEMU NVMe Ctrl (12340 ): 6921 I/Os completed (+1952) 00:11:58.626 QEMU NVMe Ctrl (12341 ): 6693 I/Os completed (+1952) 00:11:58.626 00:11:59.562 QEMU NVMe Ctrl (12343 ): 44852 I/Os completed (+1980) 00:11:59.562 QEMU NVMe Ctrl (12342 ): 44865 I/Os completed (+1982) 00:11:59.562 QEMU NVMe Ctrl (12340 ): 8901 I/Os completed (+1980) 00:11:59.562 QEMU NVMe Ctrl (12341 ): 8673 I/Os completed (+1980) 00:11:59.562 00:12:00.497 QEMU NVMe Ctrl (12343 ): 46824 I/Os completed (+1972) 00:12:00.497 QEMU NVMe Ctrl (12342 ): 46837 I/Os completed (+1972) 00:12:00.497 QEMU NVMe Ctrl (12340 ): 10877 I/Os completed (+1976) 00:12:00.497 QEMU NVMe Ctrl (12341 ): 10645 I/Os completed (+1972) 00:12:00.497 00:12:01.433 QEMU NVMe Ctrl (12343 ): 48808 I/Os completed (+1984) 00:12:01.433 QEMU NVMe Ctrl (12342 ): 48821 I/Os completed (+1984) 00:12:01.433 QEMU NVMe Ctrl (12340 ): 12869 I/Os completed (+1992) 00:12:01.433 QEMU NVMe Ctrl (12341 ): 12633 I/Os completed (+1988) 00:12:01.433 00:12:02.368 QEMU NVMe Ctrl (12343 ): 50780 I/Os completed (+1972) 00:12:02.368 QEMU NVMe Ctrl (12342 ): 50793 I/Os completed (+1972) 00:12:02.368 QEMU NVMe Ctrl (12340 ): 14841 I/Os completed (+1972) 00:12:02.368 QEMU NVMe Ctrl (12341 ): 14611 I/Os completed (+1978) 00:12:02.368 00:12:03.743 QEMU NVMe Ctrl (12343 ): 52756 I/Os completed (+1976) 00:12:03.743 QEMU NVMe Ctrl (12342 ): 52769 I/Os completed (+1976) 00:12:03.743 QEMU NVMe Ctrl (12340 ): 16817 I/Os completed (+1976) 00:12:03.743 QEMU NVMe Ctrl (12341 ): 16594 I/Os completed (+1983) 00:12:03.743 
00:12:04.677 QEMU NVMe Ctrl (12343 ): 54699 I/Os completed (+1943) 00:12:04.677 QEMU NVMe Ctrl (12342 ): 54709 I/Os completed (+1940) 00:12:04.677 QEMU NVMe Ctrl (12340 ): 18773 I/Os completed (+1956) 00:12:04.677 QEMU NVMe Ctrl (12341 ): 18545 I/Os completed (+1951) 00:12:04.677 00:12:05.610 QEMU NVMe Ctrl (12343 ): 56643 I/Os completed (+1944) 00:12:05.610 QEMU NVMe Ctrl (12342 ): 56655 I/Os completed (+1946) 00:12:05.610 QEMU NVMe Ctrl (12340 ): 20724 I/Os completed (+1951) 00:12:05.610 QEMU NVMe Ctrl (12341 ): 20493 I/Os completed (+1948) 00:12:05.610 00:12:06.545 QEMU NVMe Ctrl (12343 ): 58547 I/Os completed (+1904) 00:12:06.545 QEMU NVMe Ctrl (12342 ): 58563 I/Os completed (+1908) 00:12:06.545 QEMU NVMe Ctrl (12340 ): 22628 I/Os completed (+1904) 00:12:06.545 QEMU NVMe Ctrl (12341 ): 22408 I/Os completed (+1915) 00:12:06.545 00:12:07.112 15:47:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:12:07.112 15:47:41 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:07.112 15:47:41 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:07.112 15:47:41 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:07.112 [2024-07-20 15:47:41.739135] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:07.112 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:07.112 [2024-07-20 15:47:41.741048] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 [2024-07-20 15:47:41.741100] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 [2024-07-20 15:47:41.741119] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 [2024-07-20 15:47:41.741142] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:07.112 [2024-07-20 15:47:41.743119] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 [2024-07-20 15:47:41.743157] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 [2024-07-20 15:47:41.743174] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 [2024-07-20 15:47:41.743192] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 15:47:41 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:07.112 15:47:41 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:07.112 [2024-07-20 15:47:41.789849] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:07.112 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:07.112 [2024-07-20 15:47:41.791471] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 [2024-07-20 15:47:41.791515] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 [2024-07-20 15:47:41.791536] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 [2024-07-20 15:47:41.791552] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:07.112 [2024-07-20 15:47:41.793424] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 [2024-07-20 15:47:41.793462] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 [2024-07-20 15:47:41.793481] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 [2024-07-20 15:47:41.793496] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.112 15:47:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:12:07.112 15:47:41 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:12:07.370 15:47:41 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:07.370 15:47:41 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:07.370 15:47:41 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:12:07.370 15:47:42 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:12:07.370 15:47:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:07.370 15:47:42 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:12:07.370 15:47:42 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:12:07.370 15:47:42 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:12:07.370 Attaching to 0000:00:10.0 00:12:07.370 Attached to 0000:00:10.0 00:12:07.370 QEMU NVMe Ctrl (12343 ): 60627 I/Os completed (+2080) 00:12:07.370 QEMU NVMe Ctrl (12342 ): 60643 I/Os completed (+2080) 00:12:07.370 QEMU NVMe Ctrl (12340 ): 160 I/Os completed (+160) 00:12:07.370 00:12:07.370 15:47:42 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:12:07.370 15:47:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:12:07.370 15:47:42 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:12:07.370 Attaching to 0000:00:11.0 00:12:07.370 Attached to 0000:00:11.0 00:12:07.370 unregister_dev: QEMU NVMe Ctrl (12343 ) 00:12:07.370 unregister_dev: QEMU NVMe Ctrl (12342 ) 00:12:07.370 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:07.370 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:07.370 [2024-07-20 15:47:42.151615] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:19.576 15:47:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:12:19.576 15:47:54 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:19.576 15:47:54 sw_hotplug -- common/autotest_common.sh@714 -- # time=43.17 00:12:19.576 15:47:54 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.17 00:12:19.576 15:47:54 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=43.17 00:12:19.576 15:47:54 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.17 2 00:12:19.576 remove_attach_helper took 
43.17s to complete (handling 2 nvme drive(s)) 15:47:54 sw_hotplug -- nvme/sw_hotplug.sh@79 -- # sleep 6 00:12:26.178 15:48:00 sw_hotplug -- nvme/sw_hotplug.sh@81 -- # kill -0 83368 00:12:26.178 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 81: kill: (83368) - No such process 00:12:26.178 15:48:00 sw_hotplug -- nvme/sw_hotplug.sh@83 -- # wait 83368 00:12:26.178 15:48:00 sw_hotplug -- nvme/sw_hotplug.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:26.178 15:48:00 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # tgt_run_hotplug 00:12:26.178 15:48:00 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # local dev 00:12:26.178 15:48:00 sw_hotplug -- nvme/sw_hotplug.sh@98 -- # spdk_tgt_pid=83912 00:12:26.178 15:48:00 sw_hotplug -- nvme/sw_hotplug.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:26.178 15:48:00 sw_hotplug -- nvme/sw_hotplug.sh@100 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:26.178 15:48:00 sw_hotplug -- nvme/sw_hotplug.sh@101 -- # waitforlisten 83912 00:12:26.178 15:48:00 sw_hotplug -- common/autotest_common.sh@827 -- # '[' -z 83912 ']' 00:12:26.178 15:48:00 sw_hotplug -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.178 15:48:00 sw_hotplug -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:26.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.178 15:48:00 sw_hotplug -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.178 15:48:00 sw_hotplug -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:26.178 15:48:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.178 [2024-07-20 15:48:00.252390] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
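waitforlisten (common/autotest_common.sh@831-@836 above) blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock, retrying up to max_retries=100 times. A minimal sketch of such a polling loop; the socket path and retry budget come from the trace, while the use of rpc_get_methods as the liveness probe is an assumption:

    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    i=0
    # keep probing the RPC socket until the target responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
            rpc_get_methods >/dev/null 2>&1; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "spdk_tgt never started listening on $rpc_addr" >&2
            exit 1
        fi
        sleep 0.5
    done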
00:12:26.178 [2024-07-20 15:48:00.252505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83912 ] 00:12:26.178 [2024-07-20 15:48:00.401793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.178 [2024-07-20 15:48:00.443091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.436 15:48:01 sw_hotplug -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:26.436 15:48:01 sw_hotplug -- common/autotest_common.sh@860 -- # return 0 00:12:26.436 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:12:26.436 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:00:10.0 00:12:26.436 15:48:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.436 15:48:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.436 Nvme00n1 00:12:26.436 15:48:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.436 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme00n1 6 00:12:26.436 15:48:01 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme00n1 00:12:26.436 15:48:01 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:12:26.436 15:48:01 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:12:26.436 15:48:01 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme00n1 -t 6 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.437 [ 00:12:26.437 { 00:12:26.437 "name": "Nvme00n1", 00:12:26.437 "aliases": [ 00:12:26.437 "1ec6126a-ea2e-4cbb-87e8-c9c735dbdc44" 00:12:26.437 ], 00:12:26.437 "product_name": "NVMe disk", 00:12:26.437 "block_size": 4096, 00:12:26.437 "num_blocks": 1548666, 00:12:26.437 "uuid": "1ec6126a-ea2e-4cbb-87e8-c9c735dbdc44", 00:12:26.437 "md_size": 64, 00:12:26.437 "md_interleave": false, 00:12:26.437 "dif_type": 0, 00:12:26.437 "assigned_rate_limits": { 00:12:26.437 "rw_ios_per_sec": 0, 00:12:26.437 "rw_mbytes_per_sec": 0, 00:12:26.437 "r_mbytes_per_sec": 0, 00:12:26.437 "w_mbytes_per_sec": 0 00:12:26.437 }, 00:12:26.437 "claimed": false, 00:12:26.437 "zoned": false, 00:12:26.437 "supported_io_types": { 00:12:26.437 "read": true, 00:12:26.437 "write": true, 00:12:26.437 "unmap": true, 00:12:26.437 "write_zeroes": true, 00:12:26.437 "flush": true, 00:12:26.437 "reset": true, 00:12:26.437 "compare": true, 00:12:26.437 "compare_and_write": false, 00:12:26.437 "abort": true, 00:12:26.437 "nvme_admin": true, 00:12:26.437 "nvme_io": true 00:12:26.437 }, 00:12:26.437 "driver_specific": { 00:12:26.437 "nvme": [ 00:12:26.437 { 00:12:26.437 "pci_address": "0000:00:10.0", 00:12:26.437 "trid": { 00:12:26.437 "trtype": "PCIe", 00:12:26.437 "traddr": "0000:00:10.0" 00:12:26.437 }, 00:12:26.437 "ctrlr_data": { 00:12:26.437 
"cntlid": 0, 00:12:26.437 "vendor_id": "0x1b36", 00:12:26.437 "model_number": "QEMU NVMe Ctrl", 00:12:26.437 "serial_number": "12340", 00:12:26.437 "firmware_revision": "8.0.0", 00:12:26.437 "subnqn": "nqn.2019-08.org.qemu:12340", 00:12:26.437 "oacs": { 00:12:26.437 "security": 0, 00:12:26.437 "format": 1, 00:12:26.437 "firmware": 0, 00:12:26.437 "ns_manage": 1 00:12:26.437 }, 00:12:26.437 "multi_ctrlr": false, 00:12:26.437 "ana_reporting": false 00:12:26.437 }, 00:12:26.437 "vs": { 00:12:26.437 "nvme_version": "1.4" 00:12:26.437 }, 00:12:26.437 "ns_data": { 00:12:26.437 "id": 1, 00:12:26.437 "can_share": false 00:12:26.437 } 00:12:26.437 } 00:12:26.437 ], 00:12:26.437 "mp_policy": "active_passive" 00:12:26.437 } 00:12:26.437 } 00:12:26.437 ] 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:12:26.437 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:12:26.437 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme01 -t PCIe -a 0000:00:11.0 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.437 Nvme01n1 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.437 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme01n1 6 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme01n1 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme01n1 -t 6 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.437 15:48:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.437 [ 00:12:26.437 { 00:12:26.437 "name": "Nvme01n1", 00:12:26.437 "aliases": [ 00:12:26.437 "1bf8ad42-9232-4bf6-9dbe-8474086fa589" 00:12:26.437 ], 00:12:26.437 "product_name": "NVMe disk", 00:12:26.437 "block_size": 4096, 00:12:26.437 "num_blocks": 1310720, 00:12:26.437 "uuid": "1bf8ad42-9232-4bf6-9dbe-8474086fa589", 00:12:26.437 "assigned_rate_limits": { 00:12:26.437 "rw_ios_per_sec": 0, 00:12:26.437 "rw_mbytes_per_sec": 0, 00:12:26.437 "r_mbytes_per_sec": 0, 00:12:26.437 "w_mbytes_per_sec": 0 00:12:26.437 }, 00:12:26.437 "claimed": false, 00:12:26.437 "zoned": false, 00:12:26.437 "supported_io_types": { 00:12:26.437 "read": true, 00:12:26.437 "write": true, 00:12:26.437 "unmap": true, 00:12:26.437 "write_zeroes": true, 00:12:26.437 "flush": true, 00:12:26.437 "reset": true, 00:12:26.437 "compare": true, 00:12:26.695 "compare_and_write": false, 00:12:26.695 "abort": true, 00:12:26.695 "nvme_admin": true, 00:12:26.695 "nvme_io": true 00:12:26.695 }, 00:12:26.695 "driver_specific": { 00:12:26.695 "nvme": [ 00:12:26.695 { 00:12:26.695 "pci_address": 
"0000:00:11.0", 00:12:26.695 "trid": { 00:12:26.695 "trtype": "PCIe", 00:12:26.695 "traddr": "0000:00:11.0" 00:12:26.695 }, 00:12:26.695 "ctrlr_data": { 00:12:26.695 "cntlid": 0, 00:12:26.695 "vendor_id": "0x1b36", 00:12:26.695 "model_number": "QEMU NVMe Ctrl", 00:12:26.695 "serial_number": "12341", 00:12:26.695 "firmware_revision": "8.0.0", 00:12:26.695 "subnqn": "nqn.2019-08.org.qemu:12341", 00:12:26.695 "oacs": { 00:12:26.695 "security": 0, 00:12:26.695 "format": 1, 00:12:26.695 "firmware": 0, 00:12:26.695 "ns_manage": 1 00:12:26.695 }, 00:12:26.695 "multi_ctrlr": false, 00:12:26.695 "ana_reporting": false 00:12:26.695 }, 00:12:26.695 "vs": { 00:12:26.695 "nvme_version": "1.4" 00:12:26.695 }, 00:12:26.695 "ns_data": { 00:12:26.695 "id": 1, 00:12:26.695 "can_share": false 00:12:26.695 } 00:12:26.695 } 00:12:26.695 ], 00:12:26.695 "mp_policy": "active_passive" 00:12:26.695 } 00:12:26.695 } 00:12:26.695 ] 00:12:26.695 15:48:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.695 15:48:01 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:12:26.695 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@108 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:26.695 15:48:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.695 15:48:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.695 15:48:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.695 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # debug_remove_attach_helper 3 6 true 00:12:26.695 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:12:26.695 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:12:26.695 15:48:01 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:12:26.695 15:48:01 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:12:26.695 15:48:01 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:12:26.695 15:48:01 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:12:26.695 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:12:26.695 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:12:26.695 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:12:26.695 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:12:26.695 15:48:01 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:12:33.280 15:48:07 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:33.280 15:48:07 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:33.280 15:48:07 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:33.280 15:48:07 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:33.280 15:48:07 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:33.280 15:48:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:12:33.280 15:48:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:12:33.280 [2024-07-20 15:48:07.340942] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:33.280 [2024-07-20 15:48:07.343117] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.280 [2024-07-20 15:48:07.343274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.280 [2024-07-20 15:48:07.343411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.280 [2024-07-20 15:48:07.343448] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.280 [2024-07-20 15:48:07.343463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.280 [2024-07-20 15:48:07.343480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.280 [2024-07-20 15:48:07.343492] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.280 [2024-07-20 15:48:07.343510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.280 [2024-07-20 15:48:07.343523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.280 [2024-07-20 15:48:07.343538] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.280 [2024-07-20 15:48:07.343550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.280 [2024-07-20 15:48:07.343565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.280 [2024-07-20 15:48:07.740293] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:33.280 [2024-07-20 15:48:07.742215] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.280 [2024-07-20 15:48:07.742262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.280 [2024-07-20 15:48:07.742295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.280 [2024-07-20 15:48:07.742311] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.280 [2024-07-20 15:48:07.742324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.280 [2024-07-20 15:48:07.742336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.280 [2024-07-20 15:48:07.742350] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.280 [2024-07-20 15:48:07.742361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.280 [2024-07-20 15:48:07.742374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.280 [2024-07-20 15:48:07.742395] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.280 [2024-07-20 15:48:07.742411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.280 [2024-07-20 15:48:07.742423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.839 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:12:39.839 15:48:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.839 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:12:39.839 15:48:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.839 15:48:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.839 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:12:39.839 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:12:39.839 15:48:13 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:12:39.839 15:48:13 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:12:39.839 15:48:13 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:39.839 15:48:13 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:39.839 15:48:13 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.14 00:12:39.839 15:48:13 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:12:39.839 15:48:13 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:12:39.839 15:48:13 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:39.839 15:48:13 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:39.840 15:48:13 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.14 00:12:39.840 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.14 00:12:39.840 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme 
drive(s))' 12.14 2 00:12:39.840 remove_attach_helper took 12.14s to complete (handling 2 nvme drive(s)) 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:39.840 15:48:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.840 15:48:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.840 15:48:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.840 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:39.840 15:48:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.840 15:48:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.840 15:48:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.840 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # debug_remove_attach_helper 3 6 true 00:12:39.840 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:12:39.840 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:12:39.840 15:48:13 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:12:39.840 15:48:13 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:12:39.840 15:48:13 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:12:39.840 15:48:13 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:12:39.840 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:12:39.840 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:12:39.840 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:12:39.840 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:12:39.840 15:48:13 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:12:45.105 15:48:19 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:45.105 15:48:19 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:45.105 15:48:19 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:45.105 15:48:19 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # trap - ERR 00:12:45.105 15:48:19 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # print_backtrace 00:12:45.105 15:48:19 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:45.105 15:48:19 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:45.105 15:48:19 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:45.105 15:48:19 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:45.105 15:48:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:12:45.105 15:48:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:12:51.687 15:48:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.687 15:48:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.687 15:48:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:12:51.687 15:48:25 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.08 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:12:51.687 15:48:25 sw_hotplug -- 
common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.08 00:12:51.687 15:48:25 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.08 00:12:51.687 15:48:25 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 12.08 2 00:12:51.687 remove_attach_helper took 12.08s to complete (handling 2 nvme drive(s)) 15:48:25 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:12:51.687 15:48:25 sw_hotplug -- nvme/sw_hotplug.sh@118 -- # killprocess 83912 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@946 -- # '[' -z 83912 ']' 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@950 -- # kill -0 83912 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@951 -- # uname 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83912 00:12:51.687 killing process with pid 83912 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83912' 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@965 -- # kill 83912 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@970 -- # wait 83912 00:12:51.687 ************************************ 00:12:51.687 END TEST sw_hotplug 00:12:51.687 ************************************ 00:12:51.687 00:12:51.687 real 1m16.149s 00:12:51.687 user 0m44.115s 00:12:51.687 sys 0m15.196s 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:51.687 15:48:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:51.687 15:48:26 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:12:51.687 15:48:26 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:51.687 15:48:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:51.687 15:48:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:51.687 15:48:26 -- common/autotest_common.sh@10 -- # set +x 00:12:51.687 ************************************ 00:12:51.687 START TEST nvme_xnvme 00:12:51.687 ************************************ 00:12:51.687 15:48:26 nvme_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:51.687 * Looking for test storage... 
00:12:51.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:51.687 15:48:26 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:51.687 15:48:26 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.687 15:48:26 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.687 15:48:26 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.687 15:48:26 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.687 15:48:26 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.687 15:48:26 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.687 15:48:26 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:51.687 15:48:26 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.687 15:48:26 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:51.687 15:48:26 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:51.687 15:48:26 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:51.687 15:48:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:51.687 ************************************ 00:12:51.687 START TEST xnvme_to_malloc_dd_copy 00:12:51.687 ************************************ 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1121 -- # malloc_to_xnvme_copy 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:51.687 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:51.688 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:51.688 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:51.688 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:51.688 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:51.688 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:51.688 15:48:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:51.688 { 00:12:51.688 "subsystems": [ 00:12:51.688 { 00:12:51.688 "subsystem": "bdev", 00:12:51.688 "config": [ 00:12:51.688 { 00:12:51.688 "params": { 00:12:51.688 "block_size": 512, 00:12:51.688 "num_blocks": 2097152, 00:12:51.688 "name": "malloc0" 00:12:51.688 }, 00:12:51.688 "method": "bdev_malloc_create" 00:12:51.688 }, 00:12:51.688 { 00:12:51.688 "params": { 00:12:51.688 "io_mechanism": "libaio", 00:12:51.688 "filename": "/dev/nullb0", 00:12:51.688 "name": "null0" 00:12:51.688 }, 00:12:51.688 "method": "bdev_xnvme_create" 00:12:51.688 }, 00:12:51.688 { 00:12:51.688 "method": "bdev_wait_for_examine" 00:12:51.688 } 00:12:51.688 ] 00:12:51.688 } 00:12:51.688 ] 00:12:51.688 } 00:12:51.688 [2024-07-20 15:48:26.277336] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
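Stripped of the interleaved timestamps, the config gen_conf pipes to spdk_dd over /dev/fd/62 reads as below; a temp file stands in for the process substitution here, so this is a sketch of equivalent plumbing rather than the script's exact invocation:

cat > /tmp/xnvme_dd.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create"
        },
        {
          "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_dd.json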
00:12:51.688 [2024-07-20 15:48:26.277588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84267 ] 00:12:51.688 [2024-07-20 15:48:26.425147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.688 [2024-07-20 15:48:26.465883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.514  Copying: 267/1024 [MB] (267 MBps) Copying: 537/1024 [MB] (270 MBps) Copying: 807/1024 [MB] (270 MBps) Copying: 1024/1024 [MB] (average 269 MBps) 00:12:56.514 00:12:56.514 15:48:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:56.514 15:48:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:56.514 15:48:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:56.514 15:48:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:56.514 { 00:12:56.514 "subsystems": [ 00:12:56.514 { 00:12:56.514 "subsystem": "bdev", 00:12:56.514 "config": [ 00:12:56.514 { 00:12:56.514 "params": { 00:12:56.514 "block_size": 512, 00:12:56.514 "num_blocks": 2097152, 00:12:56.514 "name": "malloc0" 00:12:56.514 }, 00:12:56.514 "method": "bdev_malloc_create" 00:12:56.514 }, 00:12:56.514 { 00:12:56.514 "params": { 00:12:56.514 "io_mechanism": "libaio", 00:12:56.514 "filename": "/dev/nullb0", 00:12:56.514 "name": "null0" 00:12:56.514 }, 00:12:56.514 "method": "bdev_xnvme_create" 00:12:56.514 }, 00:12:56.514 { 00:12:56.514 "method": "bdev_wait_for_examine" 00:12:56.514 } 00:12:56.514 ] 00:12:56.514 } 00:12:56.514 ] 00:12:56.514 } 00:12:56.514 [2024-07-20 15:48:31.254026] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
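The 269 MBps average squares with the wall clock: the forward pass starts at 15:48:26 and the reverse copy is traced at 15:48:31. A quick check (not part of the suite):

awk 'BEGIN { printf "1024 MB / 269 MBps = %.1f s of copy time\n", 1024 / 269 }'
# -> 3.8 s; the rest of the ~5 s gap is EAL init and bdev setup/teardown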
00:12:56.514 [2024-07-20 15:48:31.254149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84332 ] 00:12:56.774 [2024-07-20 15:48:31.403562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.774 [2024-07-20 15:48:31.444116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.520  Copying: 272/1024 [MB] (272 MBps) Copying: 549/1024 [MB] (276 MBps) Copying: 826/1024 [MB] (276 MBps) Copying: 1024/1024 [MB] (average 275 MBps) 00:13:01.520 00:13:01.520 15:48:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:01.520 15:48:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:01.520 15:48:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:01.520 15:48:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:01.520 15:48:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:01.520 15:48:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:01.520 { 00:13:01.520 "subsystems": [ 00:13:01.520 { 00:13:01.520 "subsystem": "bdev", 00:13:01.520 "config": [ 00:13:01.520 { 00:13:01.520 "params": { 00:13:01.520 "block_size": 512, 00:13:01.520 "num_blocks": 2097152, 00:13:01.520 "name": "malloc0" 00:13:01.520 }, 00:13:01.520 "method": "bdev_malloc_create" 00:13:01.520 }, 00:13:01.520 { 00:13:01.520 "params": { 00:13:01.520 "io_mechanism": "io_uring", 00:13:01.520 "filename": "/dev/nullb0", 00:13:01.520 "name": "null0" 00:13:01.520 }, 00:13:01.520 "method": "bdev_xnvme_create" 00:13:01.520 }, 00:13:01.520 { 00:13:01.520 "method": "bdev_wait_for_examine" 00:13:01.520 } 00:13:01.520 ] 00:13:01.520 } 00:13:01.520 ] 00:13:01.520 } 00:13:01.520 [2024-07-20 15:48:36.143523] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
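At this point the xnvme.sh@38 loop advances from libaio to io_uring and repeats both copies. Paraphrasing the trace (gen_conf emits the JSON shown above, with only io_mechanism changing between iterations):

declare -A method_bdev_xnvme_create_0=([name]=null0 [filename]=/dev/nullb0)
xnvme_io=(libaio io_uring)
for io in "${xnvme_io[@]}"; do
  method_bdev_xnvme_create_0["io_mechanism"]=$io
  spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)   # malloc bdev -> xnvme bdev
  spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)   # xnvme bdev -> malloc bdev
done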
00:13:01.520 [2024-07-20 15:48:36.143641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84388 ] 00:13:01.520 [2024-07-20 15:48:36.290618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.779 [2024-07-20 15:48:36.331836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.239  Copying: 283/1024 [MB] (283 MBps) Copying: 569/1024 [MB] (286 MBps) Copying: 855/1024 [MB] (285 MBps) Copying: 1024/1024 [MB] (average 283 MBps) 00:13:06.239 00:13:06.239 15:48:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:06.239 15:48:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:06.239 15:48:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:06.239 15:48:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:06.239 { 00:13:06.239 "subsystems": [ 00:13:06.239 { 00:13:06.239 "subsystem": "bdev", 00:13:06.239 "config": [ 00:13:06.239 { 00:13:06.239 "params": { 00:13:06.239 "block_size": 512, 00:13:06.239 "num_blocks": 2097152, 00:13:06.239 "name": "malloc0" 00:13:06.239 }, 00:13:06.239 "method": "bdev_malloc_create" 00:13:06.239 }, 00:13:06.239 { 00:13:06.239 "params": { 00:13:06.239 "io_mechanism": "io_uring", 00:13:06.239 "filename": "/dev/nullb0", 00:13:06.239 "name": "null0" 00:13:06.239 }, 00:13:06.239 "method": "bdev_xnvme_create" 00:13:06.239 }, 00:13:06.239 { 00:13:06.239 "method": "bdev_wait_for_examine" 00:13:06.239 } 00:13:06.239 ] 00:13:06.239 } 00:13:06.239 ] 00:13:06.239 } 00:13:06.239 [2024-07-20 15:48:40.875006] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
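The same two bdevs can also be stood up by hand against a running SPDK target; a sketch using rpc.py, where bdev_xnvme_create takes filename, name, io_mechanism in that order (matching the bdev_xnvme_create lines printed later in this log) and bdev_malloc_create sizes in MB:

scripts/rpc.py bdev_malloc_create -b malloc0 1024 512        # 2097152 blocks x 512 B = 1024 MB
scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 io_uring  # same null_blk device, io_uring mechanism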
00:13:06.239 [2024-07-20 15:48:40.875582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84448 ] 00:13:06.239 [2024-07-20 15:48:41.025840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.498 [2024-07-20 15:48:41.067565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.874  Copying: 287/1024 [MB] (287 MBps) Copying: 575/1024 [MB] (288 MBps) Copying: 864/1024 [MB] (288 MBps) Copying: 1024/1024 [MB] (average 288 MBps) 00:13:10.874 00:13:10.874 15:48:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:10.874 15:48:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:10.874 ************************************ 00:13:10.874 END TEST xnvme_to_malloc_dd_copy 00:13:10.874 ************************************ 00:13:10.874 00:13:10.874 real 0m19.341s 00:13:10.874 user 0m15.172s 00:13:10.874 sys 0m3.752s 00:13:10.874 15:48:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:10.874 15:48:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:10.874 15:48:45 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:10.874 15:48:45 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:10.874 15:48:45 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:10.874 15:48:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.874 ************************************ 00:13:10.874 START TEST xnvme_bdevperf 00:13:10.874 ************************************ 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1121 -- # xnvme_bdevperf 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:10.874 15:48:45 
nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:10.874 15:48:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:10.874 { 00:13:10.874 "subsystems": [ 00:13:10.874 { 00:13:10.874 "subsystem": "bdev", 00:13:10.874 "config": [ 00:13:10.874 { 00:13:10.874 "params": { 00:13:10.874 "io_mechanism": "libaio", 00:13:10.874 "filename": "/dev/nullb0", 00:13:10.874 "name": "null0" 00:13:10.874 }, 00:13:10.874 "method": "bdev_xnvme_create" 00:13:10.874 }, 00:13:10.874 { 00:13:10.874 "method": "bdev_wait_for_examine" 00:13:10.874 } 00:13:10.874 ] 00:13:10.874 } 00:13:10.874 ] 00:13:10.874 } 00:13:11.132 [2024-07-20 15:48:45.685018] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:11.132 [2024-07-20 15:48:45.685415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84538 ] 00:13:11.132 [2024-07-20 15:48:45.836589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.132 [2024-07-20 15:48:45.877478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.390 Running I/O for 5 seconds... 00:13:16.658 00:13:16.658 Latency(us) 00:13:16.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.658 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:16.658 null0 : 5.00 167477.99 654.21 0.00 0.00 379.81 115.15 1243.60 00:13:16.658 =================================================================================================================== 00:13:16.658 Total : 167477.99 654.21 0.00 0.00 379.81 115.15 1243.60 00:13:16.658 15:48:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:16.658 15:48:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:16.658 15:48:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:16.658 15:48:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:16.658 15:48:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:16.658 15:48:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:16.658 { 00:13:16.658 "subsystems": [ 00:13:16.658 { 00:13:16.658 "subsystem": "bdev", 00:13:16.658 "config": [ 00:13:16.658 { 00:13:16.658 "params": { 00:13:16.658 "io_mechanism": "io_uring", 00:13:16.658 "filename": "/dev/nullb0", 00:13:16.658 "name": "null0" 00:13:16.658 }, 00:13:16.658 "method": "bdev_xnvme_create" 00:13:16.658 }, 00:13:16.658 { 00:13:16.658 "method": "bdev_wait_for_examine" 00:13:16.658 } 00:13:16.658 ] 00:13:16.658 } 00:13:16.658 ] 00:13:16.658 } 00:13:16.658 [2024-07-20 15:48:51.300460] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
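The libaio bdevperf row above is internally consistent: 167477.99 IOPS of 4 KiB reads is exactly the 654.21 MiB/s column, and at queue depth 64 Little's law predicts the average latency. A back-of-the-envelope check:

awk 'BEGIN {
  printf "throughput: %.2f MiB/s\n", 167477.99 * 4096 / 1048576   # IOPS x 4 KiB
  printf "latency:    %.1f us\n",    64 / 167477.99 * 1e6         # queue depth / IOPS
}'
# -> 654.21 MiB/s and ~382 us, in line with the 379.81 us average reported above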
00:13:16.658 [2024-07-20 15:48:51.300582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84601 ] 00:13:16.658 [2024-07-20 15:48:51.450576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.916 [2024-07-20 15:48:51.491927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.916 Running I/O for 5 seconds... 00:13:22.183 00:13:22.184 Latency(us) 00:13:22.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.184 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:22.184 null0 : 5.00 213303.55 833.22 0.00 0.00 297.76 175.19 411.24 00:13:22.184 =================================================================================================================== 00:13:22.184 Total : 213303.55 833.22 0.00 0.00 297.76 175.19 411.24 00:13:22.184 15:48:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:22.184 15:48:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:22.184 00:13:22.184 real 0m11.256s 00:13:22.184 user 0m8.016s 00:13:22.184 sys 0m3.041s 00:13:22.184 ************************************ 00:13:22.184 END TEST xnvme_bdevperf 00:13:22.184 ************************************ 00:13:22.184 15:48:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:22.184 15:48:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:22.184 ************************************ 00:13:22.184 END TEST nvme_xnvme 00:13:22.184 ************************************ 00:13:22.184 00:13:22.184 real 0m30.876s 00:13:22.184 user 0m23.291s 00:13:22.184 sys 0m6.971s 00:13:22.184 15:48:56 nvme_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:22.184 15:48:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.184 15:48:56 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:22.184 15:48:56 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:22.184 15:48:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:22.184 15:48:56 -- common/autotest_common.sh@10 -- # set +x 00:13:22.184 ************************************ 00:13:22.184 START TEST blockdev_xnvme 00:13:22.184 ************************************ 00:13:22.184 15:48:56 blockdev_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:22.442 * Looking for test storage... 
00:13:22.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=84730 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:22.442 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 84730 00:13:22.442 15:48:57 blockdev_xnvme -- common/autotest_common.sh@827 -- # '[' -z 84730 ']' 00:13:22.442 15:48:57 blockdev_xnvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.442 15:48:57 blockdev_xnvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:22.442 15:48:57 blockdev_xnvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.442 15:48:57 blockdev_xnvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:22.442 15:48:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.442 [2024-07-20 15:48:57.211185] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
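blockdev.sh follows the usual harness pattern here: launch spdk_tgt in the background, block until its RPC socket answers, then drive it with rpc_cmd. A rough standalone equivalent (using rpc_get_methods as the liveness probe is this sketch's choice, not the harness's):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
tgt_pid=$!
until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done   # ~waitforlisten
# ... issue bdev RPCs over /var/tmp/spdk.sock ...
kill "$tgt_pid"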
00:13:22.442 [2024-07-20 15:48:57.211524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84730 ] 00:13:22.701 [2024-07-20 15:48:57.360953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.701 [2024-07-20 15:48:57.403682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.268 15:48:57 blockdev_xnvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:23.268 15:48:57 blockdev_xnvme -- common/autotest_common.sh@860 -- # return 0 00:13:23.268 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:13:23.268 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:13:23.268 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:23.268 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:23.268 15:48:57 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:23.527 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:23.785 Waiting for block devices as requested 00:13:24.043 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:24.043 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:29.312 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1666 -- # local nvme bdf 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:29.312 
15:49:03 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1c1n1 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1c1n1 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 
00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.312 15:49:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.312 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:29.312 nvme0n1 00:13:29.312 nvme0n2 00:13:29.313 nvme0n3 00:13:29.313 nvme1n1 00:13:29.313 nvme2n1 00:13:29.313 nvme3n1 00:13:29.313 15:49:03 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.313 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:13:29.313 15:49:03 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.313 15:49:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.313 15:49:03 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.313 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:13:29.313 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:13:29.313 15:49:03 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.313 15:49:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.313 15:49:03 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.313 15:49:03 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:13:29.313 15:49:03 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.313 15:49:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.313 15:49:04 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.313 15:49:04 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:29.313 15:49:04 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.313 15:49:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.313 15:49:04 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.313 15:49:04 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:13:29.313 15:49:04 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:13:29.313 15:49:04 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:13:29.313 15:49:04 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.313 15:49:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 
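With all six bdev_xnvme_create calls accepted, the script reads the bdevs back through bdev_get_bdevs and the jq filters traced above; the same spot check can be run by hand:

scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
# expected output: nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1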
00:13:29.313 15:49:04 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.571 15:49:04 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:13:29.571 15:49:04 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:13:29.572 15:49:04 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3e026eb0-5696-4a50-98fc-04bc757bf693"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3e026eb0-5696-4a50-98fc-04bc757bf693",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "5bc5fa5e-f7f1-42fd-94c0-ad317f7470d3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5bc5fa5e-f7f1-42fd-94c0-ad317f7470d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "b98a1040-fd8c-4cb3-a5ce-1bea724ce081"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b98a1040-fd8c-4cb3-a5ce-1bea724ce081",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9a9bb57f-eb3b-4fcd-a42d-42216428e61e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9a9bb57f-eb3b-4fcd-a42d-42216428e61e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0fb92f01-c3f8-4db2-bb13-88886c5dfa38"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0fb92f01-c3f8-4db2-bb13-88886c5dfa38",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "4daf4e69-c810-476e-8abc-8be2507b93bc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4daf4e69-c810-476e-8abc-8be2507b93bc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:29.572 15:49:04 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:13:29.572 15:49:04 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:13:29.572 15:49:04 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:13:29.572 15:49:04 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 84730 00:13:29.572 15:49:04 blockdev_xnvme -- common/autotest_common.sh@946 -- # '[' -z 84730 ']' 00:13:29.572 15:49:04 blockdev_xnvme -- common/autotest_common.sh@950 -- # kill -0 84730 00:13:29.572 15:49:04 blockdev_xnvme -- common/autotest_common.sh@951 -- # uname 00:13:29.572 15:49:04 blockdev_xnvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:29.572 15:49:04 blockdev_xnvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84730 00:13:29.572 killing process with pid 84730 00:13:29.572 15:49:04 blockdev_xnvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:29.572 15:49:04 blockdev_xnvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:29.572 15:49:04 blockdev_xnvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84730' 00:13:29.572 15:49:04 blockdev_xnvme -- common/autotest_common.sh@965 -- # kill 84730 00:13:29.572 15:49:04 blockdev_xnvme -- common/autotest_common.sh@970 -- # wait 84730 00:13:29.830 15:49:04 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:29.830 15:49:04 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:29.830 15:49:04 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:29.830 15:49:04 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:29.830 15:49:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.830 ************************************ 00:13:29.830 START TEST bdev_hello_world 00:13:29.830 ************************************ 00:13:29.830 15:49:04 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:30.089 [2024-07-20 15:49:04.639808] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:13:30.089 [2024-07-20 15:49:04.639925] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84995 ] 00:13:30.089 [2024-07-20 15:49:04.789281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.089 [2024-07-20 15:49:04.830325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.347 [2024-07-20 15:49:05.008260] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:30.347 [2024-07-20 15:49:05.008306] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:30.347 [2024-07-20 15:49:05.008324] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:30.347 [2024-07-20 15:49:05.010497] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:30.347 [2024-07-20 15:49:05.010733] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:30.347 [2024-07-20 15:49:05.010762] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:30.347 [2024-07-20 15:49:05.010996] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:13:30.347 00:13:30.347 [2024-07-20 15:49:05.011018] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:30.606 00:13:30.606 real 0m0.670s 00:13:30.606 user 0m0.357s 00:13:30.606 sys 0m0.204s 00:13:30.606 15:49:05 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:30.606 15:49:05 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:30.606 ************************************ 00:13:30.606 END TEST bdev_hello_world 00:13:30.606 ************************************ 00:13:30.606 15:49:05 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:13:30.606 15:49:05 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:30.606 15:49:05 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:30.606 15:49:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.606 ************************************ 00:13:30.606 START TEST bdev_bounds 00:13:30.606 ************************************ 00:13:30.606 15:49:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:13:30.606 15:49:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=85026 00:13:30.606 15:49:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:30.606 15:49:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:30.606 15:49:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 85026' 00:13:30.606 Process bdevio pid: 85026 00:13:30.606 15:49:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 85026 00:13:30.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
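bdev_bounds splits the work between two processes: bdevio started with -w (wait for the test trigger) and -s 0 (no reserved memory), and tests.py, which fires the whole CUnit matrix through a perform_tests RPC. Roughly, as traced:

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests   # runs every suite shown below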
00:13:30.606 15:49:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 85026 ']' 00:13:30.606 15:49:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.606 15:49:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:30.606 15:49:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.606 15:49:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:30.606 15:49:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:30.606 [2024-07-20 15:49:05.381787] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:30.606 [2024-07-20 15:49:05.381929] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85026 ] 00:13:30.863 [2024-07-20 15:49:05.533539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:30.863 [2024-07-20 15:49:05.576786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.863 [2024-07-20 15:49:05.576994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.863 [2024-07-20 15:49:05.576891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.430 15:49:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:31.430 15:49:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:13:31.430 15:49:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:31.690 I/O targets: 00:13:31.690 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:31.690 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:31.690 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:31.690 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:31.690 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:31.690 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:31.690 00:13:31.690 00:13:31.690 CUnit - A unit testing framework for C - Version 2.1-3 00:13:31.690 http://cunit.sourceforge.net/ 00:13:31.690 00:13:31.690 00:13:31.690 Suite: bdevio tests on: nvme3n1 00:13:31.690 Test: blockdev write read block ...passed 00:13:31.690 Test: blockdev write zeroes read block ...passed 00:13:31.690 Test: blockdev write zeroes read no split ...passed 00:13:31.690 Test: blockdev write zeroes read split ...passed 00:13:31.690 Test: blockdev write zeroes read split partial ...passed 00:13:31.690 Test: blockdev reset ...passed 00:13:31.690 Test: blockdev write read 8 blocks ...passed 00:13:31.690 Test: blockdev write read size > 128k ...passed 00:13:31.690 Test: blockdev write read invalid size ...passed 00:13:31.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.690 Test: blockdev write read max offset ...passed 00:13:31.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.690 Test: blockdev writev readv 8 blocks ...passed 00:13:31.690 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.690 Test: blockdev writev readv block ...passed 00:13:31.690 Test: blockdev 
writev readv size > 128k ...passed 00:13:31.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.690 Test: blockdev comparev and writev ...passed 00:13:31.690 Test: blockdev nvme passthru rw ...passed 00:13:31.690 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.690 Test: blockdev nvme admin passthru ...passed 00:13:31.690 Test: blockdev copy ...passed 00:13:31.690 Suite: bdevio tests on: nvme2n1 00:13:31.690 Test: blockdev write read block ...passed 00:13:31.690 Test: blockdev write zeroes read block ...passed 00:13:31.690 Test: blockdev write zeroes read no split ...passed 00:13:31.690 Test: blockdev write zeroes read split ...passed 00:13:31.690 Test: blockdev write zeroes read split partial ...passed 00:13:31.690 Test: blockdev reset ...passed 00:13:31.690 Test: blockdev write read 8 blocks ...passed 00:13:31.690 Test: blockdev write read size > 128k ...passed 00:13:31.690 Test: blockdev write read invalid size ...passed 00:13:31.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.690 Test: blockdev write read max offset ...passed 00:13:31.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.690 Test: blockdev writev readv 8 blocks ...passed 00:13:31.690 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.690 Test: blockdev writev readv block ...passed 00:13:31.690 Test: blockdev writev readv size > 128k ...passed 00:13:31.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.690 Test: blockdev comparev and writev ...passed 00:13:31.690 Test: blockdev nvme passthru rw ...passed 00:13:31.690 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.690 Test: blockdev nvme admin passthru ...passed 00:13:31.690 Test: blockdev copy ...passed 00:13:31.690 Suite: bdevio tests on: nvme1n1 00:13:31.690 Test: blockdev write read block ...passed 00:13:31.690 Test: blockdev write zeroes read block ...passed 00:13:31.690 Test: blockdev write zeroes read no split ...passed 00:13:31.690 Test: blockdev write zeroes read split ...passed 00:13:31.690 Test: blockdev write zeroes read split partial ...passed 00:13:31.690 Test: blockdev reset ...passed 00:13:31.690 Test: blockdev write read 8 blocks ...passed 00:13:31.690 Test: blockdev write read size > 128k ...passed 00:13:31.690 Test: blockdev write read invalid size ...passed 00:13:31.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.690 Test: blockdev write read max offset ...passed 00:13:31.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.690 Test: blockdev writev readv 8 blocks ...passed 00:13:31.690 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.690 Test: blockdev writev readv block ...passed 00:13:31.690 Test: blockdev writev readv size > 128k ...passed 00:13:31.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.690 Test: blockdev comparev and writev ...passed 00:13:31.690 Test: blockdev nvme passthru rw ...passed 00:13:31.690 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.690 Test: blockdev nvme admin passthru ...passed 00:13:31.690 Test: blockdev copy ...passed 00:13:31.690 Suite: bdevio tests on: nvme0n3 00:13:31.690 Test: blockdev write read block ...passed 00:13:31.690 Test: blockdev write zeroes read block ...passed 
00:13:31.690 Test: blockdev write zeroes read no split ...passed 00:13:31.690 Test: blockdev write zeroes read split ...passed 00:13:31.690 Test: blockdev write zeroes read split partial ...passed 00:13:31.690 Test: blockdev reset ...passed 00:13:31.690 Test: blockdev write read 8 blocks ...passed 00:13:31.690 Test: blockdev write read size > 128k ...passed 00:13:31.690 Test: blockdev write read invalid size ...passed 00:13:31.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.690 Test: blockdev write read max offset ...passed 00:13:31.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.690 Test: blockdev writev readv 8 blocks ...passed 00:13:31.690 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.690 Test: blockdev writev readv block ...passed 00:13:31.690 Test: blockdev writev readv size > 128k ...passed 00:13:31.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.690 Test: blockdev comparev and writev ...passed 00:13:31.690 Test: blockdev nvme passthru rw ...passed 00:13:31.690 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.690 Test: blockdev nvme admin passthru ...passed 00:13:31.690 Test: blockdev copy ...passed 00:13:31.690 Suite: bdevio tests on: nvme0n2 00:13:31.690 Test: blockdev write read block ...passed 00:13:31.690 Test: blockdev write zeroes read block ...passed 00:13:31.690 Test: blockdev write zeroes read no split ...passed 00:13:31.690 Test: blockdev write zeroes read split ...passed 00:13:31.690 Test: blockdev write zeroes read split partial ...passed 00:13:31.690 Test: blockdev reset ...passed 00:13:31.690 Test: blockdev write read 8 blocks ...passed 00:13:31.690 Test: blockdev write read size > 128k ...passed 00:13:31.690 Test: blockdev write read invalid size ...passed 00:13:31.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.690 Test: blockdev write read max offset ...passed 00:13:31.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.690 Test: blockdev writev readv 8 blocks ...passed 00:13:31.690 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.690 Test: blockdev writev readv block ...passed 00:13:31.690 Test: blockdev writev readv size > 128k ...passed 00:13:31.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.690 Test: blockdev comparev and writev ...passed 00:13:31.690 Test: blockdev nvme passthru rw ...passed 00:13:31.690 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.690 Test: blockdev nvme admin passthru ...passed 00:13:31.690 Test: blockdev copy ...passed 00:13:31.690 Suite: bdevio tests on: nvme0n1 00:13:31.690 Test: blockdev write read block ...passed 00:13:31.690 Test: blockdev write zeroes read block ...passed 00:13:31.690 Test: blockdev write zeroes read no split ...passed 00:13:31.690 Test: blockdev write zeroes read split ...passed 00:13:31.690 Test: blockdev write zeroes read split partial ...passed 00:13:31.690 Test: blockdev reset ...passed 00:13:31.690 Test: blockdev write read 8 blocks ...passed 00:13:31.690 Test: blockdev write read size > 128k ...passed 00:13:31.690 Test: blockdev write read invalid size ...passed 00:13:31.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.690 Test: blockdev write read offset + 
nbytes > size of blockdev ...passed 00:13:31.690 Test: blockdev write read max offset ...passed 00:13:31.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.690 Test: blockdev writev readv 8 blocks ...passed 00:13:31.690 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.690 Test: blockdev writev readv block ...passed 00:13:31.690 Test: blockdev writev readv size > 128k ...passed 00:13:31.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.690 Test: blockdev comparev and writev ...passed 00:13:31.690 Test: blockdev nvme passthru rw ...passed 00:13:31.690 Test: blockdev nvme passthru vendor specific ...passed 00:13:31.690 Test: blockdev nvme admin passthru ...passed 00:13:31.690 Test: blockdev copy ...passed 00:13:31.690 00:13:31.690 Run Summary: Type Total Ran Passed Failed Inactive 00:13:31.691 suites 6 6 n/a 0 0 00:13:31.691 tests 138 138 138 0 0 00:13:31.691 asserts 780 780 780 0 n/a 00:13:31.691 00:13:31.691 Elapsed time = 0.354 seconds 00:13:31.691 0 00:13:31.691 15:49:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 85026 00:13:31.691 15:49:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 85026 ']' 00:13:31.691 15:49:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 85026 00:13:31.691 15:49:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:13:31.691 15:49:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:31.691 15:49:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85026 00:13:31.691 15:49:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:31.691 15:49:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:31.691 15:49:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85026' 00:13:31.691 killing process with pid 85026 00:13:31.691 15:49:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 85026 00:13:31.691 15:49:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 85026 00:13:31.949 15:49:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:13:31.949 00:13:31.949 real 0m1.391s 00:13:31.949 user 0m3.263s 00:13:31.949 sys 0m0.381s 00:13:31.949 15:49:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:31.949 15:49:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:31.949 ************************************ 00:13:31.949 END TEST bdev_bounds 00:13:31.949 ************************************ 00:13:32.219 15:49:06 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:13:32.219 15:49:06 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:32.219 15:49:06 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:32.219 15:49:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.219 ************************************ 00:13:32.219 START TEST bdev_nbd 00:13:32.219 ************************************ 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 
nvme1n1 nvme2n1 nvme3n1' '' 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=85069 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 85069 /var/tmp/spdk-nbd.sock 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 85069 ']' 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:32.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:32.219 15:49:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:32.219 [2024-07-20 15:49:06.861090] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
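The bdev_nbd stage above is driven by nbd_function_test: bdev_svc is started against bdev.json with its RPC server on /var/tmp/spdk-nbd.sock, and each of the six bdevs is then exported as a kernel /dev/nbdX device and smoke-read. A minimal sketch of one such mapping, using only calls visible in this trace (socket path, bdev name, and dd flags are copied from the log; the bounded 20-retry wait of waitfornbd is simplified here to an open-ended poll, and /tmp/nbdtest stands in for the test's scratch file):

  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Export one bdev as a kernel block device over NBD.
  $rpc -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0

  # Poll /proc/partitions until the kernel registers the device (waitfornbd),
  # then read a single 4096-byte block with direct I/O as a sanity check.
  until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

  # Enumerate the active mappings, then tear the device back down.
  $rpc -s "$sock" nbd_get_disks
  $rpc -s "$sock" nbd_stop_disk /dev/nbd0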
00:13:32.219 [2024-07-20 15:49:06.861427] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.514 [2024-07-20 15:49:07.014853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.514 [2024-07-20 15:49:07.057215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.084 1+0 records in 
00:13:33.084 1+0 records out 00:13:33.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704249 s, 5.8 MB/s 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:33.084 15:49:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.343 1+0 records in 00:13:33.343 1+0 records out 00:13:33.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000639999 s, 6.4 MB/s 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:33.343 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.602 1+0 records in 00:13:33.602 1+0 records out 00:13:33.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673694 s, 6.1 MB/s 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:33.602 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.862 1+0 records in 00:13:33.862 1+0 records out 00:13:33.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00097 s, 4.2 MB/s 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:33.862 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.121 1+0 records in 00:13:34.121 1+0 records out 00:13:34.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061633 s, 6.6 MB/s 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:34.121 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:34.380 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:34.380 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:34.380 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@865 -- # local i 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.381 1+0 records in 00:13:34.381 1+0 records out 00:13:34.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000924047 s, 4.4 MB/s 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:34.381 15:49:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:34.381 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:34.381 { 00:13:34.381 "nbd_device": "/dev/nbd0", 00:13:34.381 "bdev_name": "nvme0n1" 00:13:34.381 }, 00:13:34.381 { 00:13:34.381 "nbd_device": "/dev/nbd1", 00:13:34.381 "bdev_name": "nvme0n2" 00:13:34.381 }, 00:13:34.381 { 00:13:34.381 "nbd_device": "/dev/nbd2", 00:13:34.381 "bdev_name": "nvme0n3" 00:13:34.381 }, 00:13:34.381 { 00:13:34.381 "nbd_device": "/dev/nbd3", 00:13:34.381 "bdev_name": "nvme1n1" 00:13:34.381 }, 00:13:34.381 { 00:13:34.381 "nbd_device": "/dev/nbd4", 00:13:34.381 "bdev_name": "nvme2n1" 00:13:34.381 }, 00:13:34.381 { 00:13:34.381 "nbd_device": "/dev/nbd5", 00:13:34.381 "bdev_name": "nvme3n1" 00:13:34.381 } 00:13:34.381 ]' 00:13:34.381 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:34.381 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:34.381 { 00:13:34.381 "nbd_device": "/dev/nbd0", 00:13:34.381 "bdev_name": "nvme0n1" 00:13:34.381 }, 00:13:34.381 { 00:13:34.381 "nbd_device": "/dev/nbd1", 00:13:34.381 "bdev_name": "nvme0n2" 00:13:34.381 }, 00:13:34.381 { 00:13:34.381 "nbd_device": "/dev/nbd2", 00:13:34.381 "bdev_name": "nvme0n3" 00:13:34.381 }, 00:13:34.381 { 00:13:34.381 "nbd_device": "/dev/nbd3", 00:13:34.381 "bdev_name": "nvme1n1" 00:13:34.381 }, 00:13:34.381 { 00:13:34.381 "nbd_device": "/dev/nbd4", 00:13:34.381 "bdev_name": "nvme2n1" 00:13:34.381 }, 00:13:34.381 { 00:13:34.381 "nbd_device": "/dev/nbd5", 00:13:34.381 "bdev_name": "nvme3n1" 00:13:34.381 } 00:13:34.381 ]' 00:13:34.381 15:49:09 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.641 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:34.900 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:34.900 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:34.901 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:34.901 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.901 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.901 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:34.901 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:34.901 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.901 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.901 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:35.160 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:35.160 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:35.160 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:35.160 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.160 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.160 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 
/proc/partitions 00:13:35.160 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:35.160 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.160 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.160 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:35.420 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:35.420 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:35.420 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:35.420 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.420 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.420 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:35.420 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:35.420 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.420 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.420 15:49:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:35.420 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:35.420 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:35.420 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:35.420 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.420 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.420 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:35.420 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:35.420 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.420 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.420 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:35.679 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:35.679 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:35.680 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:35.680 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.680 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.680 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:35.680 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:35.680 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.680 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:35.680 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.680 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.938 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:35.939 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:35.939 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:35.939 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:35.939 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:35.939 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:35.939 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:35.939 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:36.197 /dev/nbd0 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.197 1+0 records in 00:13:36.197 1+0 records out 00:13:36.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590021 s, 6.9 MB/s 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:36.197 15:49:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:13:36.455 /dev/nbd1 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.455 1+0 records in 00:13:36.455 1+0 records out 00:13:36.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706497 s, 5.8 MB/s 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:36.455 15:49:11 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:13:36.455 /dev/nbd10 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:36.455 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.713 1+0 records in 00:13:36.713 1+0 records out 00:13:36.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671898 s, 6.1 MB/s 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:13:36.713 /dev/nbd11 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:36.713 15:49:11 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.713 1+0 records in 00:13:36.713 1+0 records out 00:13:36.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000808428 s, 5.1 MB/s 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:36.713 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:13:36.971 /dev/nbd12 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.971 1+0 records in 00:13:36.971 1+0 records out 00:13:36.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000739059 s, 5.5 MB/s 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:36.971 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:37.230 /dev/nbd13 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.230 1+0 records in 00:13:37.230 1+0 records out 00:13:37.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679544 s, 6.0 MB/s 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.230 15:49:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:37.489 { 00:13:37.489 "nbd_device": "/dev/nbd0", 00:13:37.489 "bdev_name": "nvme0n1" 00:13:37.489 }, 00:13:37.489 { 00:13:37.489 "nbd_device": "/dev/nbd1", 00:13:37.489 "bdev_name": "nvme0n2" 00:13:37.489 }, 00:13:37.489 { 00:13:37.489 "nbd_device": "/dev/nbd10", 00:13:37.489 "bdev_name": "nvme0n3" 00:13:37.489 }, 00:13:37.489 { 00:13:37.489 "nbd_device": "/dev/nbd11", 00:13:37.489 "bdev_name": "nvme1n1" 00:13:37.489 }, 00:13:37.489 { 00:13:37.489 "nbd_device": "/dev/nbd12", 00:13:37.489 "bdev_name": "nvme2n1" 00:13:37.489 }, 00:13:37.489 { 00:13:37.489 "nbd_device": "/dev/nbd13", 00:13:37.489 "bdev_name": "nvme3n1" 00:13:37.489 } 00:13:37.489 ]' 00:13:37.489 15:49:12 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:37.489 { 00:13:37.489 "nbd_device": "/dev/nbd0", 00:13:37.489 "bdev_name": "nvme0n1" 00:13:37.489 }, 00:13:37.489 { 00:13:37.489 "nbd_device": "/dev/nbd1", 00:13:37.489 "bdev_name": "nvme0n2" 00:13:37.489 }, 00:13:37.489 { 00:13:37.489 "nbd_device": "/dev/nbd10", 00:13:37.489 "bdev_name": "nvme0n3" 00:13:37.489 }, 00:13:37.489 { 00:13:37.489 "nbd_device": "/dev/nbd11", 00:13:37.489 "bdev_name": "nvme1n1" 00:13:37.489 }, 00:13:37.489 { 00:13:37.489 "nbd_device": "/dev/nbd12", 00:13:37.489 "bdev_name": "nvme2n1" 00:13:37.489 }, 00:13:37.489 { 00:13:37.489 "nbd_device": "/dev/nbd13", 00:13:37.489 "bdev_name": "nvme3n1" 00:13:37.489 } 00:13:37.489 ]' 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:37.489 /dev/nbd1 00:13:37.489 /dev/nbd10 00:13:37.489 /dev/nbd11 00:13:37.489 /dev/nbd12 00:13:37.489 /dev/nbd13' 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:37.489 /dev/nbd1 00:13:37.489 /dev/nbd10 00:13:37.489 /dev/nbd11 00:13:37.489 /dev/nbd12 00:13:37.489 /dev/nbd13' 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:37.489 256+0 records in 00:13:37.489 256+0 records out 00:13:37.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514245 s, 204 MB/s 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.489 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:37.748 256+0 records in 00:13:37.748 256+0 records out 00:13:37.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128765 s, 8.1 MB/s 00:13:37.748 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.748 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:37.748 256+0 records in 00:13:37.748 256+0 records out 00:13:37.748 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.12473 s, 8.4 MB/s 00:13:37.748 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:37.748 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:38.006 256+0 records in 00:13:38.006 256+0 records out 00:13:38.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118579 s, 8.8 MB/s 00:13:38.006 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.006 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:38.006 256+0 records in 00:13:38.006 256+0 records out 00:13:38.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12374 s, 8.5 MB/s 00:13:38.006 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.007 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:38.265 256+0 records in 00:13:38.265 256+0 records out 00:13:38.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14434 s, 7.3 MB/s 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:38.266 256+0 records in 00:13:38.266 256+0 records out 00:13:38.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119276 s, 8.8 MB/s 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.266 15:49:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:13:38.266 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.266 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:38.266 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:38.266 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:38.266 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:38.266 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:38.266 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:38.266 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:38.266 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.266 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:38.266 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.266 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:38.524 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:38.524 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:38.524 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:38.524 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.524 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.524 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.524 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.524 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.524 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.524 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:38.782 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:38.782 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:38.782 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:38.782 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.782 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.782 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:38.782 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.783 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.783 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.783 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.041 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:39.299 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:39.299 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:39.299 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:39.299 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.299 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.299 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:39.299 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.299 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.299 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.299 15:49:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:39.557 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:39.557 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:39.557 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:39.557 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.557 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:13:39.557 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:39.557 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.557 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.557 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:39.557 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.557 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:39.815 malloc_lvol_verify 00:13:39.815 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:40.074 27474a56-5cba-4a99-bd0c-69dd930312d0 00:13:40.074 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:40.332 2d97d9bc-2c0b-46b4-873e-61397349c589 00:13:40.332 15:49:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:40.591 /dev/nbd0 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:40.591 mke2fs 1.46.5 (30-Dec-2021) 00:13:40.591 Discarding device blocks: 0/4096 done 00:13:40.591 Creating filesystem with 4096 1k blocks and 
1024 inodes 00:13:40.591 00:13:40.591 Allocating group tables: 0/1 done 00:13:40.591 Writing inode tables: 0/1 done 00:13:40.591 Creating journal (1024 blocks): done 00:13:40.591 Writing superblocks and filesystem accounting information: 0/1 done 00:13:40.591 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 85069 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 85069 ']' 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 85069 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:40.591 15:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85069 00:13:40.849 killing process with pid 85069 00:13:40.849 15:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:40.849 15:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:40.849 15:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85069' 00:13:40.849 15:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 85069 00:13:40.849 15:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 85069 00:13:41.107 15:49:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:13:41.107 00:13:41.107 real 0m8.882s 00:13:41.107 user 0m11.656s 00:13:41.107 sys 0m4.030s 00:13:41.107 15:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:41.107 
************************************ 00:13:41.107 END TEST bdev_nbd 00:13:41.107 ************************************ 00:13:41.107 15:49:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:41.107 15:49:15 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:13:41.107 15:49:15 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:13:41.107 15:49:15 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:13:41.107 15:49:15 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:13:41.107 15:49:15 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:41.107 15:49:15 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:41.107 15:49:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:41.107 ************************************ 00:13:41.107 START TEST bdev_fio 00:13:41.107 ************************************ 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:41.107 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n2]' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n2 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n3]' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n3 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:41.107 ************************************ 00:13:41.107 START TEST bdev_fio_rw_verify 00:13:41.107 ************************************ 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:41.107 15:49:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:41.366 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:41.366 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:41.366 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:41.366 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:41.366 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:41.366 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:41.366 fio-3.35 00:13:41.366 Starting 6 threads 00:13:53.564 00:13:53.564 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=85458: Sat Jul 20 15:49:26 2024 00:13:53.564 read: IOPS=33.5k, 
BW=131MiB/s (137MB/s)(1308MiB/10001msec)
00:13:53.564 slat (usec): min=2, max=299, avg= 6.50, stdev= 2.69
00:13:53.564 clat (usec): min=111, max=2694, avg=591.64, stdev=132.73
00:13:53.564 lat (usec): min=115, max=2703, avg=598.15, stdev=133.35
00:13:53.564 clat percentiles (usec):
00:13:53.564 | 50.000th=[ 619], 99.000th=[ 873], 99.900th=[ 1450], 99.990th=[ 2474],
00:13:53.564 | 99.999th=[ 2671]
00:13:53.564 write: IOPS=33.8k, BW=132MiB/s (139MB/s)(1322MiB/10001msec); 0 zone resets
00:13:53.564 slat (usec): min=11, max=903, avg=17.39, stdev=11.01
00:13:53.564 clat (usec): min=87, max=3070, avg=646.26, stdev=120.21
00:13:53.564 lat (usec): min=105, max=3085, avg=663.65, stdev=120.59
00:13:53.564 clat percentiles (usec):
00:13:53.564 | 50.000th=[ 660], 99.000th=[ 930], 99.900th=[ 1336], 99.990th=[ 2278],
00:13:53.564 | 99.999th=[ 2999]
00:13:53.564 bw ( KiB/s): min=116096, max=147723, per=100.00%, avg=135406.16, stdev=1802.16, samples=114
00:13:53.564 iops : min=29024, max=36930, avg=33851.32, stdev=450.55, samples=114
00:13:53.564 lat (usec) : 100=0.01%, 250=2.47%, 500=8.60%, 750=82.13%, 1000=6.31%
00:13:53.564 lat (msec) : 2=0.46%, 4=0.03%
00:13:53.564 cpu : usr=65.56%, sys=25.89%, ctx=7495, majf=0, minf=29595
00:13:53.564 IO depths : 1=12.3%, 2=24.8%, 4=50.2%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:53.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:53.564 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:53.564 issued rwts: total=334949,338503,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:53.564 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:53.564
00:13:53.564 Run status group 0 (all jobs):
00:13:53.564 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=1308MiB (1372MB), run=10001-10001msec
00:13:53.564 WRITE: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=1322MiB (1387MB), run=10001-10001msec
00:13:53.564 -----------------------------------------------------
00:13:53.564 Suppressions used:
00:13:53.565 count bytes template
00:13:53.565 6 48 /usr/src/fio/parse.c
00:13:53.565 3325 319200 /usr/src/fio/iolog.c
00:13:53.565 1 8 libtcmalloc_minimal.so
00:13:53.565 1 904 libcrypto.so
00:13:53.565 -----------------------------------------------------
00:13:53.565
00:13:53.565
00:13:53.565 real 0m11.153s
00:13:53.565 user 0m40.062s
00:13:53.565 sys 0m15.896s
00:13:53.565 15:49:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:13:53.565 15:49:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:13:53.565 ************************************
00:13:53.565 END TEST bdev_fio_rw_verify
00:13:53.565 ************************************
00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f
00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim
00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=
00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio --
common/autotest_common.sh@1279 -- # local env_context= 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3e026eb0-5696-4a50-98fc-04bc757bf693"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3e026eb0-5696-4a50-98fc-04bc757bf693",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "5bc5fa5e-f7f1-42fd-94c0-ad317f7470d3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5bc5fa5e-f7f1-42fd-94c0-ad317f7470d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "b98a1040-fd8c-4cb3-a5ce-1bea724ce081"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b98a1040-fd8c-4cb3-a5ce-1bea724ce081",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9a9bb57f-eb3b-4fcd-a42d-42216428e61e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"9a9bb57f-eb3b-4fcd-a42d-42216428e61e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0fb92f01-c3f8-4db2-bb13-88886c5dfa38"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0fb92f01-c3f8-4db2-bb13-88886c5dfa38",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "4daf4e69-c810-476e-8abc-8be2507b93bc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4daf4e69-c810-476e-8abc-8be2507b93bc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:13:53.565 /home/vagrant/spdk_repo/spdk 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:13:53.565 00:13:53.565 real 0m11.375s 00:13:53.565 user 0m40.166s 00:13:53.565 sys 0m16.019s 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:53.565 15:49:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:53.565 ************************************ 00:13:53.565 END TEST bdev_fio 00:13:53.565 ************************************ 00:13:53.565 15:49:27 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:53.565 15:49:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:53.565 15:49:27 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:13:53.565 15:49:27 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:53.565 15:49:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.565 ************************************ 00:13:53.565 START TEST bdev_verify 00:13:53.565 
************************************
00:13:53.565 15:49:27 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:13:53.565 [2024-07-20 15:49:27.249096] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
[2024-07-20 15:49:27.249230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85616 ]
00:13:53.565 [2024-07-20 15:49:27.399831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:53.565 [2024-07-20 15:49:27.442326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:53.565 [2024-07-20 15:49:27.442480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:53.565 Running I/O for 5 seconds...
00:13:58.830
00:13:58.830 Latency(us)
00:13:58.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:58.830 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:58.830 Verification LBA range: start 0x0 length 0x80000
00:13:58.830 nvme0n1 : 5.03 1907.69 7.45 0.00 0.00 66992.21 13580.95 58956.08
00:13:58.830 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:58.830 Verification LBA range: start 0x80000 length 0x80000
00:13:58.830 nvme0n1 : 5.06 1870.30 7.31 0.00 0.00 68333.82 10896.35 58534.97
00:13:58.830 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:58.830 Verification LBA range: start 0x0 length 0x80000
00:13:58.830 nvme0n2 : 5.03 1907.15 7.45 0.00 0.00 66906.40 14107.35 63588.34
00:13:58.830 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:58.830 Verification LBA range: start 0x80000 length 0x80000
00:13:58.830 nvme0n2 : 5.05 1875.05 7.32 0.00 0.00 68049.41 10054.12 62325.00
00:13:58.830 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:58.830 Verification LBA range: start 0x0 length 0x80000
00:13:58.830 nvme0n3 : 5.03 1906.68 7.45 0.00 0.00 66820.90 11159.54 67799.49
00:13:58.830 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:58.830 Verification LBA range: start 0x80000 length 0x80000
00:13:58.830 nvme0n3 : 5.06 1872.00 7.31 0.00 0.00 68063.93 12422.89 58956.08
00:13:58.830 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:58.830 Verification LBA range: start 0x0 length 0x20000
00:13:58.830 nvme1n1 : 5.06 1923.95 7.52 0.00 0.00 66109.92 9106.61 64430.57
00:13:58.830 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:58.830 Verification LBA range: start 0x20000 length 0x20000
00:13:58.830 nvme1n1 : 5.07 1869.31 7.30 0.00 0.00 68067.79 12107.05 65272.80
00:13:58.830 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:58.830 Verification LBA range: start 0x0 length 0xbd0bd
00:13:58.830 nvme2n1 : 5.06 2979.91 11.64 0.00 0.00 42584.22 6132.49 53692.14
00:13:58.830 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:58.830 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:13:58.830 nvme2n1 : 5.07 2952.98 11.54 0.00 0.00 42965.52 5816.65 54323.82
00:13:58.830 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:58.830 Verification LBA range: start 0x0 length 0xa0000
00:13:58.830 nvme3n1 : 5.06 1946.79 7.60 0.00 0.00 64984.91 1342.30 67799.49
00:13:58.830 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:58.830 Verification LBA range: start 0xa0000 length 0xa0000
00:13:58.830 nvme3n1 : 5.07 1892.44 7.39 0.00 0.00 66881.97 9159.25 59377.20
00:13:58.830 ===================================================================================================================
00:13:58.830 Total : 24904.24 97.28 0.00 0.00 61301.82 1342.30 67799.49
00:13:58.830
00:13:58.830 real 0m5.820s
00:13:58.831 user 0m8.475s
00:13:58.831 sys 0m2.193s
00:13:58.831 15:49:32 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:13:58.831 15:49:32 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:13:58.831 ************************************
00:13:58.831 END TEST bdev_verify
00:13:58.831 ************************************
00:13:58.831 15:49:33 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:13:58.831 15:49:33 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']'
00:13:58.831 15:49:33 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable
00:13:58.831 15:49:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:58.831 ************************************
00:13:58.831 START TEST bdev_verify_big_io
00:13:58.831 ************************************
00:13:58.831 15:49:33 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:13:58.831 [2024-07-20 15:49:33.136053] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
[2024-07-20 15:49:33.136206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85709 ]
00:13:58.831 [2024-07-20 15:49:33.287591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:58.831 [2024-07-20 15:49:33.329470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:58.831 [2024-07-20 15:49:33.329589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:58.831 Running I/O for 5 seconds...
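The verify_big_io pass just launched is the same bdevperf binary rerun with 64 KiB I/O. A minimal sketch of reproducing the run by hand, assuming the repo layout used throughout this log and the bdev.json generated earlier by the suite; the reading of -C is inferred from the per-core job pairs in the results that follow, not from the log itself:

cd /home/vagrant/spdk_repo/spdk
# -q 128: per-job queue depth        -o 65536: 64 KiB I/O size
# -w verify: write/read-back verify  -t 5: run for 5 seconds
# -m 0x3: cores 0 and 1              -C: run each bdev on every core in the mask
#                                        (hence the Core Mask 0x1/0x2 pairs below)
./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3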
00:14:05.388
00:14:05.388 Latency(us)
00:14:05.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:05.388 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:05.388 Verification LBA range: start 0x0 length 0x8000
00:14:05.388 nvme0n1 : 5.60 145.84 9.11 0.00 0.00 849450.60 150759.12 1590129.71
00:14:05.388 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:05.388 Verification LBA range: start 0x8000 length 0x8000
00:14:05.388 nvme0n1 : 5.70 134.76 8.42 0.00 0.00 912067.34 105699.83 1381256.74
00:14:05.388 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:05.388 Verification LBA range: start 0x0 length 0x8000
00:14:05.388 nvme0n2 : 5.68 185.82 11.61 0.00 0.00 646852.76 108647.63 758006.75
00:14:05.388 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:05.388 Verification LBA range: start 0x8000 length 0x8000
00:14:05.388 nvme0n2 : 5.69 199.71 12.48 0.00 0.00 601113.66 88855.24 734424.31
00:14:05.388 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:05.388 Verification LBA range: start 0x0 length 0x8000
00:14:05.388 nvme0n3 : 5.71 142.86 8.93 0.00 0.00 837997.41 108647.63 1684459.44
00:14:05.388 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:05.388 Verification LBA range: start 0x8000 length 0x8000
00:14:05.388 nvme0n3 : 5.71 186.31 11.64 0.00 0.00 649206.25 14739.02 677152.69
00:14:05.388 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:05.388 Verification LBA range: start 0x0 length 0x2000
00:14:05.388 nvme1n1 : 5.71 140.02 8.75 0.00 0.00 835355.81 97277.53 1873118.89
00:14:05.389 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:05.389 Verification LBA range: start 0x2000 length 0x2000
00:14:05.389 nvme1n1 : 5.70 146.46 9.15 0.00 0.00 802645.43 77485.13 1475586.47
00:14:05.389 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:05.389 Verification LBA range: start 0x0 length 0xbd0b
00:14:05.389 nvme2n1 : 5.69 225.04 14.07 0.00 0.00 508612.70 12107.05 576085.13
00:14:05.389 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:05.389 Verification LBA range: start 0xbd0b length 0xbd0b
00:14:05.389 nvme2n1 : 5.70 221.64 13.85 0.00 0.00 518072.39 36847.55 667045.94
00:14:05.389 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:05.389 Verification LBA range: start 0x0 length 0xa000
00:14:05.389 nvme3n1 : 5.72 201.32 12.58 0.00 0.00 558563.62 2355.61 653570.26
00:14:05.389 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:05.389 Verification LBA range: start 0xa000 length 0xa000
00:14:05.389 nvme3n1 : 5.71 165.41 10.34 0.00 0.00 681451.33 8685.49 1354305.39
00:14:05.389 ===================================================================================================================
00:14:05.389 Total : 2095.18 130.95 0.00 0.00 675992.44 2355.61 1873118.89
00:14:05.389
00:14:05.389 real 0m6.499s
00:14:05.389 user 0m11.705s
00:14:05.389 sys 0m0.619s
00:14:05.389 15:49:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable
00:14:05.389 15:49:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:14:05.389 ************************************
00:14:05.389 END TEST bdev_verify_big_io
00:14:05.389 ************************************
00:14:05.389 15:49:39 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:05.389 15:49:39 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:14:05.389 15:49:39 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable
00:14:05.389 15:49:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:05.389 ************************************
00:14:05.389 START TEST bdev_write_zeroes
00:14:05.389 ************************************
00:14:05.389 15:49:39 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:05.389 [2024-07-20 15:49:39.705088] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
[2024-07-20 15:49:39.705224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85803 ]
00:14:05.389 [2024-07-20 15:49:39.856454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:05.389 [2024-07-20 15:49:39.897416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:05.389 Running I/O for 1 seconds...
00:14:06.767
00:14:06.767 Latency(us)
00:14:06.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:06.767 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:06.767 nvme0n1 : 1.02 10945.15 42.75 0.00 0.00 11683.23 6027.21 22845.48
00:14:06.767 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:06.767 nvme0n2 : 1.02 10921.07 42.66 0.00 0.00 11699.82 6158.80 23056.04
00:14:06.767 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:06.767 nvme0n3 : 1.03 10988.12 42.92 0.00 0.00 11621.74 7158.95 21687.42
00:14:06.767 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:06.767 nvme1n1 : 1.03 10969.19 42.85 0.00 0.00 11635.50 7158.95 22213.81
00:14:06.767 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:06.767 nvme2n1 : 1.02 12954.55 50.60 0.00 0.00 9842.28 4842.82 18213.22
00:14:06.767 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:06.767 nvme3n1 : 1.03 10950.41 42.78 0.00 0.00 11582.56 6000.89 22529.64
00:14:06.767 ===================================================================================================================
00:14:06.767 Total : 67728.50 264.56 0.00 0.00 11299.92 4842.82 23056.04
00:14:06.767
00:14:06.767 real 0m1.748s
00:14:06.767 user 0m0.966s
00:14:06.767 sys 0m0.590s
00:14:06.767 15:49:41 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable
************************************
END TEST bdev_write_zeroes
************************************
00:14:06.767 15:49:41 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:14:06.767 15:49:41 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128
-o 4096 -w write_zeroes -t 1 '' 00:14:06.767 15:49:41 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:14:06.767 15:49:41 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:06.767 15:49:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:06.767 ************************************ 00:14:06.767 START TEST bdev_json_nonenclosed 00:14:06.767 ************************************ 00:14:06.767 15:49:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:06.767 [2024-07-20 15:49:41.522528] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:06.767 [2024-07-20 15:49:41.522653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85841 ] 00:14:07.026 [2024-07-20 15:49:41.673747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.026 [2024-07-20 15:49:41.715610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.026 [2024-07-20 15:49:41.715692] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:07.026 [2024-07-20 15:49:41.715722] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:07.026 [2024-07-20 15:49:41.715735] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:07.026 00:14:07.026 real 0m0.373s 00:14:07.026 user 0m0.155s 00:14:07.026 sys 0m0.115s 00:14:07.026 ************************************ 00:14:07.026 END TEST bdev_json_nonenclosed 00:14:07.026 ************************************ 00:14:07.026 15:49:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:07.026 15:49:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:07.285 15:49:41 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:07.285 15:49:41 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:14:07.285 15:49:41 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:07.285 15:49:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.285 ************************************ 00:14:07.285 START TEST bdev_json_nonarray 00:14:07.285 ************************************ 00:14:07.285 15:49:41 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:07.285 [2024-07-20 15:49:41.965013] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
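bdev_json_nonenclosed above and bdev_json_nonarray starting here are negative tests: each hands bdevperf a deliberately malformed --json configuration and expects startup to fail cleanly. The fixture files themselves are not reproduced in this log; the following is a hypothetical pair that would provoke the same two json_config_prepare_ctx errors:

# Hypothetical stand-ins for test/bdev/nonenclosed.json and nonarray.json --
# the repo's actual fixtures are not shown in this log:
printf '[]\n' > /tmp/nonenclosed.json                 # top level is not a JSON object
printf '{ "subsystems": {} }\n' > /tmp/nonarray.json  # "subsystems" is not an array
./build/examples/bdevperf --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1
# expected: *ERROR*: Invalid JSON configuration: not enclosed in {}.
./build/examples/bdevperf --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1
# expected: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.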
00:14:07.285 [2024-07-20 15:49:41.965154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85866 ] 00:14:07.543 [2024-07-20 15:49:42.113578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.543 [2024-07-20 15:49:42.154786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.543 [2024-07-20 15:49:42.154886] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:07.543 [2024-07-20 15:49:42.154913] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:07.544 [2024-07-20 15:49:42.154926] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:07.544 00:14:07.544 real 0m0.380s 00:14:07.544 user 0m0.159s 00:14:07.544 sys 0m0.117s 00:14:07.544 15:49:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:07.544 15:49:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:07.544 ************************************ 00:14:07.544 END TEST bdev_json_nonarray 00:14:07.544 ************************************ 00:14:07.544 15:49:42 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:14:07.544 15:49:42 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:14:07.544 15:49:42 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:14:07.544 15:49:42 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:14:07.544 15:49:42 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:14:07.544 15:49:42 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:07.544 15:49:42 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:07.544 15:49:42 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:07.544 15:49:42 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:07.544 15:49:42 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:07.544 15:49:42 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:07.544 15:49:42 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:08.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:09.412 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.311 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.311 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.311 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:11.311 00:14:11.311 real 0m48.903s 00:14:11.311 user 1m25.441s 00:14:11.311 sys 0m30.974s 00:14:11.311 15:49:45 blockdev_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:11.311 15:49:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.311 ************************************ 00:14:11.311 END TEST blockdev_xnvme 00:14:11.311 ************************************ 00:14:11.311 15:49:45 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:11.311 15:49:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:11.311 15:49:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:11.311 15:49:45 -- common/autotest_common.sh@10 -- 
# set +x 00:14:11.311 ************************************ 00:14:11.311 START TEST ublk 00:14:11.311 ************************************ 00:14:11.311 15:49:45 ublk -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:11.311 * Looking for test storage... 00:14:11.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:11.311 15:49:46 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:11.311 15:49:46 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:11.311 15:49:46 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:11.311 15:49:46 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:11.311 15:49:46 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:11.311 15:49:46 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:11.311 15:49:46 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:11.311 15:49:46 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:11.311 15:49:46 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:11.311 15:49:46 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:11.311 15:49:46 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:11.311 15:49:46 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:11.311 15:49:46 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:11.311 15:49:46 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:11.311 15:49:46 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:11.311 15:49:46 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:11.311 15:49:46 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:11.311 15:49:46 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:11.311 15:49:46 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:11.311 15:49:46 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:11.311 15:49:46 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:11.311 15:49:46 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:11.311 15:49:46 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.311 ************************************ 00:14:11.311 START TEST test_save_ublk_config 00:14:11.311 ************************************ 00:14:11.311 15:49:46 ublk.test_save_ublk_config -- common/autotest_common.sh@1121 -- # test_save_config 00:14:11.311 15:49:46 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:11.311 15:49:46 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86152 00:14:11.311 15:49:46 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:11.311 15:49:46 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:11.311 15:49:46 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86152 00:14:11.570 15:49:46 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 86152 ']' 00:14:11.570 15:49:46 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.570 15:49:46 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:11.570 15:49:46 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
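test_save_config stands up spdk_tgt with ublk logging and drives it over /var/tmp/spdk.sock. A condensed sketch of the round trip the test is about to perform, reconstructed from the RPC names that appear in the trace and in the saved JSON below; the malloc sizing follows the MALLOC_SIZE_MB=128 / MALLOC_BS=4096 defaults sourced above, and the short -q/-d flag spellings are an assumption:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc ublk_create_target                      # logs "UBLK target created successfully"
$rpc bdev_malloc_create -b malloc0 128 4096  # backing bdev for the ublk disk
$rpc ublk_start_disk malloc0 0 -q 1 -d 128   # ublk id 0, 1 queue, depth 128 -> /dev/ublkb0
$rpc save_config > /tmp/ublk_config.json     # emits the JSON dump shown below
# a fresh target can later replay the same state:
# $rpc load_config < /tmp/ublk_config.json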
00:14:11.570 15:49:46 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:11.570 15:49:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:11.570 [2024-07-20 15:49:46.201696] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:11.570 [2024-07-20 15:49:46.201835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86152 ] 00:14:11.570 [2024-07-20 15:49:46.351198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.829 [2024-07-20 15:49:46.414827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.396 15:49:46 ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:12.396 15:49:46 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:14:12.396 15:49:46 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:12.396 15:49:46 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:12.396 15:49:46 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.396 15:49:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:12.396 [2024-07-20 15:49:46.978400] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:12.396 [2024-07-20 15:49:46.978685] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:12.396 malloc0 00:14:12.396 [2024-07-20 15:49:47.010500] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:12.396 [2024-07-20 15:49:47.010612] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:12.396 [2024-07-20 15:49:47.010635] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:12.396 [2024-07-20 15:49:47.010644] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:12.396 [2024-07-20 15:49:47.019464] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:12.396 [2024-07-20 15:49:47.019490] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:12.396 [2024-07-20 15:49:47.026389] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:12.396 [2024-07-20 15:49:47.026486] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:12.396 [2024-07-20 15:49:47.043383] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:12.396 0 00:14:12.396 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.396 15:49:47 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:12.396 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.396 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:12.656 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.656 15:49:47 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:12.656 "subsystems": [ 00:14:12.656 { 00:14:12.656 "subsystem": "keyring", 00:14:12.656 "config": [] 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "subsystem": "iobuf", 00:14:12.656 "config": [ 00:14:12.656 { 
00:14:12.656 "method": "iobuf_set_options", 00:14:12.656 "params": { 00:14:12.656 "small_pool_count": 8192, 00:14:12.656 "large_pool_count": 1024, 00:14:12.656 "small_bufsize": 8192, 00:14:12.656 "large_bufsize": 135168 00:14:12.656 } 00:14:12.656 } 00:14:12.656 ] 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "subsystem": "sock", 00:14:12.656 "config": [ 00:14:12.656 { 00:14:12.656 "method": "sock_set_default_impl", 00:14:12.656 "params": { 00:14:12.656 "impl_name": "posix" 00:14:12.656 } 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "method": "sock_impl_set_options", 00:14:12.656 "params": { 00:14:12.656 "impl_name": "ssl", 00:14:12.656 "recv_buf_size": 4096, 00:14:12.656 "send_buf_size": 4096, 00:14:12.656 "enable_recv_pipe": true, 00:14:12.656 "enable_quickack": false, 00:14:12.656 "enable_placement_id": 0, 00:14:12.656 "enable_zerocopy_send_server": true, 00:14:12.656 "enable_zerocopy_send_client": false, 00:14:12.656 "zerocopy_threshold": 0, 00:14:12.656 "tls_version": 0, 00:14:12.656 "enable_ktls": false 00:14:12.656 } 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "method": "sock_impl_set_options", 00:14:12.656 "params": { 00:14:12.656 "impl_name": "posix", 00:14:12.656 "recv_buf_size": 2097152, 00:14:12.656 "send_buf_size": 2097152, 00:14:12.656 "enable_recv_pipe": true, 00:14:12.656 "enable_quickack": false, 00:14:12.656 "enable_placement_id": 0, 00:14:12.656 "enable_zerocopy_send_server": true, 00:14:12.656 "enable_zerocopy_send_client": false, 00:14:12.656 "zerocopy_threshold": 0, 00:14:12.656 "tls_version": 0, 00:14:12.656 "enable_ktls": false 00:14:12.656 } 00:14:12.656 } 00:14:12.656 ] 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "subsystem": "vmd", 00:14:12.656 "config": [] 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "subsystem": "accel", 00:14:12.656 "config": [ 00:14:12.656 { 00:14:12.656 "method": "accel_set_options", 00:14:12.656 "params": { 00:14:12.656 "small_cache_size": 128, 00:14:12.656 "large_cache_size": 16, 00:14:12.656 "task_count": 2048, 00:14:12.656 "sequence_count": 2048, 00:14:12.656 "buf_count": 2048 00:14:12.656 } 00:14:12.656 } 00:14:12.656 ] 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "subsystem": "bdev", 00:14:12.656 "config": [ 00:14:12.656 { 00:14:12.656 "method": "bdev_set_options", 00:14:12.656 "params": { 00:14:12.656 "bdev_io_pool_size": 65535, 00:14:12.656 "bdev_io_cache_size": 256, 00:14:12.656 "bdev_auto_examine": true, 00:14:12.656 "iobuf_small_cache_size": 128, 00:14:12.656 "iobuf_large_cache_size": 16 00:14:12.656 } 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "method": "bdev_raid_set_options", 00:14:12.656 "params": { 00:14:12.656 "process_window_size_kb": 1024 00:14:12.656 } 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "method": "bdev_iscsi_set_options", 00:14:12.656 "params": { 00:14:12.656 "timeout_sec": 30 00:14:12.656 } 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "method": "bdev_nvme_set_options", 00:14:12.656 "params": { 00:14:12.656 "action_on_timeout": "none", 00:14:12.656 "timeout_us": 0, 00:14:12.656 "timeout_admin_us": 0, 00:14:12.656 "keep_alive_timeout_ms": 10000, 00:14:12.656 "arbitration_burst": 0, 00:14:12.656 "low_priority_weight": 0, 00:14:12.656 "medium_priority_weight": 0, 00:14:12.656 "high_priority_weight": 0, 00:14:12.656 "nvme_adminq_poll_period_us": 10000, 00:14:12.656 "nvme_ioq_poll_period_us": 0, 00:14:12.656 "io_queue_requests": 0, 00:14:12.656 "delay_cmd_submit": true, 00:14:12.656 "transport_retry_count": 4, 00:14:12.656 "bdev_retry_count": 3, 00:14:12.656 "transport_ack_timeout": 0, 00:14:12.656 
"ctrlr_loss_timeout_sec": 0, 00:14:12.656 "reconnect_delay_sec": 0, 00:14:12.656 "fast_io_fail_timeout_sec": 0, 00:14:12.656 "disable_auto_failback": false, 00:14:12.656 "generate_uuids": false, 00:14:12.656 "transport_tos": 0, 00:14:12.656 "nvme_error_stat": false, 00:14:12.656 "rdma_srq_size": 0, 00:14:12.656 "io_path_stat": false, 00:14:12.656 "allow_accel_sequence": false, 00:14:12.656 "rdma_max_cq_size": 0, 00:14:12.656 "rdma_cm_event_timeout_ms": 0, 00:14:12.656 "dhchap_digests": [ 00:14:12.656 "sha256", 00:14:12.656 "sha384", 00:14:12.656 "sha512" 00:14:12.656 ], 00:14:12.656 "dhchap_dhgroups": [ 00:14:12.656 "null", 00:14:12.656 "ffdhe2048", 00:14:12.656 "ffdhe3072", 00:14:12.656 "ffdhe4096", 00:14:12.656 "ffdhe6144", 00:14:12.656 "ffdhe8192" 00:14:12.656 ] 00:14:12.656 } 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "method": "bdev_nvme_set_hotplug", 00:14:12.656 "params": { 00:14:12.656 "period_us": 100000, 00:14:12.656 "enable": false 00:14:12.656 } 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "method": "bdev_malloc_create", 00:14:12.656 "params": { 00:14:12.656 "name": "malloc0", 00:14:12.656 "num_blocks": 8192, 00:14:12.656 "block_size": 4096, 00:14:12.656 "physical_block_size": 4096, 00:14:12.656 "uuid": "3b577276-0450-412e-b108-0275c3bd15e0", 00:14:12.656 "optimal_io_boundary": 0 00:14:12.656 } 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "method": "bdev_wait_for_examine" 00:14:12.656 } 00:14:12.656 ] 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "subsystem": "scsi", 00:14:12.656 "config": null 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "subsystem": "scheduler", 00:14:12.656 "config": [ 00:14:12.656 { 00:14:12.656 "method": "framework_set_scheduler", 00:14:12.656 "params": { 00:14:12.656 "name": "static" 00:14:12.656 } 00:14:12.656 } 00:14:12.656 ] 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "subsystem": "vhost_scsi", 00:14:12.656 "config": [] 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "subsystem": "vhost_blk", 00:14:12.656 "config": [] 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "subsystem": "ublk", 00:14:12.656 "config": [ 00:14:12.656 { 00:14:12.656 "method": "ublk_create_target", 00:14:12.656 "params": { 00:14:12.656 "cpumask": "1" 00:14:12.656 } 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "method": "ublk_start_disk", 00:14:12.656 "params": { 00:14:12.656 "bdev_name": "malloc0", 00:14:12.656 "ublk_id": 0, 00:14:12.656 "num_queues": 1, 00:14:12.656 "queue_depth": 128 00:14:12.656 } 00:14:12.656 } 00:14:12.656 ] 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "subsystem": "nbd", 00:14:12.656 "config": [] 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "subsystem": "nvmf", 00:14:12.656 "config": [ 00:14:12.656 { 00:14:12.656 "method": "nvmf_set_config", 00:14:12.656 "params": { 00:14:12.656 "discovery_filter": "match_any", 00:14:12.656 "admin_cmd_passthru": { 00:14:12.656 "identify_ctrlr": false 00:14:12.656 } 00:14:12.656 } 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "method": "nvmf_set_max_subsystems", 00:14:12.656 "params": { 00:14:12.656 "max_subsystems": 1024 00:14:12.656 } 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "method": "nvmf_set_crdt", 00:14:12.656 "params": { 00:14:12.656 "crdt1": 0, 00:14:12.656 "crdt2": 0, 00:14:12.656 "crdt3": 0 00:14:12.656 } 00:14:12.656 } 00:14:12.656 ] 00:14:12.656 }, 00:14:12.656 { 00:14:12.656 "subsystem": "iscsi", 00:14:12.656 "config": [ 00:14:12.656 { 00:14:12.656 "method": "iscsi_set_options", 00:14:12.656 "params": { 00:14:12.656 "node_base": "iqn.2016-06.io.spdk", 00:14:12.656 "max_sessions": 128, 00:14:12.656 "max_connections_per_session": 
2, 00:14:12.656 "max_queue_depth": 64, 00:14:12.656 "default_time2wait": 2, 00:14:12.656 "default_time2retain": 20, 00:14:12.656 "first_burst_length": 8192, 00:14:12.656 "immediate_data": true, 00:14:12.656 "allow_duplicated_isid": false, 00:14:12.656 "error_recovery_level": 0, 00:14:12.656 "nop_timeout": 60, 00:14:12.656 "nop_in_interval": 30, 00:14:12.657 "disable_chap": false, 00:14:12.657 "require_chap": false, 00:14:12.657 "mutual_chap": false, 00:14:12.657 "chap_group": 0, 00:14:12.657 "max_large_datain_per_connection": 64, 00:14:12.657 "max_r2t_per_connection": 4, 00:14:12.657 "pdu_pool_size": 36864, 00:14:12.657 "immediate_data_pool_size": 16384, 00:14:12.657 "data_out_pool_size": 2048 00:14:12.657 } 00:14:12.657 } 00:14:12.657 ] 00:14:12.657 } 00:14:12.657 ] 00:14:12.657 }' 00:14:12.657 15:49:47 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86152 00:14:12.657 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 86152 ']' 00:14:12.657 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 86152 00:14:12.657 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:14:12.657 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:12.657 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86152 00:14:12.657 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:12.657 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:12.657 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86152' 00:14:12.657 killing process with pid 86152 00:14:12.657 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 86152 00:14:12.657 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 86152 00:14:12.916 [2024-07-20 15:49:47.643085] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:12.916 [2024-07-20 15:49:47.676408] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:12.916 [2024-07-20 15:49:47.676576] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:12.916 [2024-07-20 15:49:47.684388] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:12.916 [2024-07-20 15:49:47.684441] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:12.916 [2024-07-20 15:49:47.684453] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:12.916 [2024-07-20 15:49:47.684481] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:12.916 [2024-07-20 15:49:47.684629] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:13.174 15:49:47 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86189 00:14:13.174 15:49:47 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86189 00:14:13.174 15:49:47 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:13.174 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 86189 ']' 00:14:13.174 15:49:47 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:13.174 "subsystems": [ 00:14:13.174 { 00:14:13.174 "subsystem": "keyring", 00:14:13.174 "config": [] 00:14:13.174 }, 00:14:13.174 { 00:14:13.174 "subsystem": 
"iobuf", 00:14:13.174 "config": [ 00:14:13.174 { 00:14:13.174 "method": "iobuf_set_options", 00:14:13.174 "params": { 00:14:13.174 "small_pool_count": 8192, 00:14:13.175 "large_pool_count": 1024, 00:14:13.175 "small_bufsize": 8192, 00:14:13.175 "large_bufsize": 135168 00:14:13.175 } 00:14:13.175 } 00:14:13.175 ] 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "subsystem": "sock", 00:14:13.175 "config": [ 00:14:13.175 { 00:14:13.175 "method": "sock_set_default_impl", 00:14:13.175 "params": { 00:14:13.175 "impl_name": "posix" 00:14:13.175 } 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "method": "sock_impl_set_options", 00:14:13.175 "params": { 00:14:13.175 "impl_name": "ssl", 00:14:13.175 "recv_buf_size": 4096, 00:14:13.175 "send_buf_size": 4096, 00:14:13.175 "enable_recv_pipe": true, 00:14:13.175 "enable_quickack": false, 00:14:13.175 "enable_placement_id": 0, 00:14:13.175 "enable_zerocopy_send_server": true, 00:14:13.175 "enable_zerocopy_send_client": false, 00:14:13.175 "zerocopy_threshold": 0, 00:14:13.175 "tls_version": 0, 00:14:13.175 "enable_ktls": false 00:14:13.175 } 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "method": "sock_impl_set_options", 00:14:13.175 "params": { 00:14:13.175 "impl_name": "posix", 00:14:13.175 "recv_buf_size": 2097152, 00:14:13.175 "send_buf_size": 2097152, 00:14:13.175 "enable_recv_pipe": true, 00:14:13.175 "enable_quickack": false, 00:14:13.175 "enable_placement_id": 0, 00:14:13.175 "enable_zerocopy_send_server": true, 00:14:13.175 "enable_zerocopy_send_client": false, 00:14:13.175 "zerocopy_threshold": 0, 00:14:13.175 "tls_version": 0, 00:14:13.175 "enable_ktls": false 00:14:13.175 } 00:14:13.175 } 00:14:13.175 ] 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "subsystem": "vmd", 00:14:13.175 "config": [] 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "subsystem": "accel", 00:14:13.175 "config": [ 00:14:13.175 { 00:14:13.175 "method": "accel_set_options", 00:14:13.175 "params": { 00:14:13.175 "small_cache_size": 128, 00:14:13.175 "large_cache_size": 16, 00:14:13.175 "task_count": 2048, 00:14:13.175 "sequence_count": 2048, 00:14:13.175 "buf_count": 2048 00:14:13.175 } 00:14:13.175 } 00:14:13.175 ] 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "subsystem": "bdev", 00:14:13.175 "config": [ 00:14:13.175 { 00:14:13.175 "method": "bdev_set_options", 00:14:13.175 "params": { 00:14:13.175 "bdev_io_pool_size": 65535, 00:14:13.175 "bdev_io_cache_size": 256, 00:14:13.175 "bdev_auto_examine": true, 00:14:13.175 "iobuf_small_cache_size": 128, 00:14:13.175 "iobuf_large_cache_size": 16 00:14:13.175 } 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "method": "bdev_raid_set_options", 00:14:13.175 "params": { 00:14:13.175 "process_window_size_kb": 1024 00:14:13.175 } 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "method": "bdev_iscsi_set_options", 00:14:13.175 "params": { 00:14:13.175 "timeout_sec": 30 00:14:13.175 } 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "method": "bdev_nvme_set_options", 00:14:13.175 "params": { 00:14:13.175 "action_on_timeout": "none", 00:14:13.175 "timeout_us": 0, 00:14:13.175 "timeout_admin_us": 0, 00:14:13.175 "keep_alive_timeout_ms": 10000, 00:14:13.175 "arbitration_burst": 0, 00:14:13.175 "low_priority_weight": 0, 00:14:13.175 "medium_priority_weight": 0, 00:14:13.175 "high_priority_weight": 0, 00:14:13.175 "nvme_adminq_poll_period_us": 10000, 00:14:13.175 "nvme_ioq_poll_period_us": 0, 00:14:13.175 "io_queue_requests": 0, 00:14:13.175 "delay_cmd_submit": true, 00:14:13.175 "transport_retry_count": 4, 00:14:13.175 "bdev_retry_count": 3, 00:14:13.175 
"transport_ack_timeout": 0, 00:14:13.175 "ctrlr_loss_timeout_sec": 0, 00:14:13.175 "reconnect_delay_sec": 0, 00:14:13.175 "fast_io_fail_timeout_sec": 0, 00:14:13.175 "disable_auto_failback": false, 00:14:13.175 "generate_uuids": false, 00:14:13.175 "transport_tos": 0, 00:14:13.175 "nvme_error_stat": false, 00:14:13.175 "rdma_srq_size": 0, 00:14:13.175 "io_path_stat": false, 00:14:13.175 "allow_accel_sequence": false, 00:14:13.175 "rdma_max_cq_size": 0, 00:14:13.175 "rdma_cm_event_timeout_ms": 0, 00:14:13.175 "dhchap_digests": [ 00:14:13.175 "sha256", 00:14:13.175 "sha384", 00:14:13.175 "sha512" 00:14:13.175 ], 00:14:13.175 "dhchap_dhgroups": [ 00:14:13.175 "null", 00:14:13.175 "ffdhe2048", 00:14:13.175 "ffdhe3072", 00:14:13.175 "ffdhe4096", 00:14:13.175 "ffdhe6144", 00:14:13.175 "ffdhe8192" 00:14:13.175 ] 00:14:13.175 } 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "method": "bdev_nvme_set_hotplug", 00:14:13.175 "params": { 00:14:13.175 "period_us": 100000, 00:14:13.175 "enable": false 00:14:13.175 } 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "method": "bdev_malloc_create", 00:14:13.175 "params": { 00:14:13.175 "name": "malloc0", 00:14:13.175 "num_blocks": 8192, 00:14:13.175 "block_size": 4096, 00:14:13.175 "physical_block_size": 4096, 00:14:13.175 "uuid": "3b577276-0450-412e-b108-0275c3bd15e0", 00:14:13.175 "optimal_io_boundary": 0 00:14:13.175 } 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "method": "bdev_wait_for_examine" 00:14:13.175 } 00:14:13.175 ] 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "subsystem": "scsi", 00:14:13.175 "config": null 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "subsystem": "scheduler", 00:14:13.175 "config": [ 00:14:13.175 { 00:14:13.175 "method": "framework_set_scheduler", 00:14:13.175 "params": { 00:14:13.175 "name": "static" 00:14:13.175 } 00:14:13.175 } 00:14:13.175 ] 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "subsystem": "vhost_scsi", 00:14:13.175 "config": [] 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "subsystem": "vhost_blk", 00:14:13.175 "config": [] 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "subsystem": "ublk", 00:14:13.175 "config": [ 00:14:13.175 { 00:14:13.175 "method": "ublk_create_target", 00:14:13.175 "params": { 00:14:13.175 "cpumask": "1" 00:14:13.175 } 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "method": "ublk_start_disk", 00:14:13.175 "params": { 00:14:13.175 "bdev_name": "malloc0", 00:14:13.175 "ublk_id": 0, 00:14:13.175 "num_queues": 1, 00:14:13.175 "queue_depth": 128 00:14:13.175 } 00:14:13.175 } 00:14:13.175 ] 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "subsystem": "nbd", 00:14:13.175 "config": [] 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "subsystem": "nvmf", 00:14:13.175 "config": [ 00:14:13.175 { 00:14:13.175 "method": "nvmf_set_config", 00:14:13.175 "params": { 00:14:13.175 "discovery_filter": "match_any", 00:14:13.175 "admin_cmd_passthru": { 00:14:13.175 "identify_ctrlr": false 00:14:13.175 } 00:14:13.175 } 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "method": "nvmf_set_max_subsystems", 00:14:13.175 "params": { 00:14:13.175 "max_subsystems": 1024 00:14:13.175 } 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "method": "nvmf_set_crdt", 00:14:13.175 "params": { 00:14:13.175 "crdt1": 0, 00:14:13.175 "crdt2": 0, 00:14:13.175 "crdt3": 0 00:14:13.175 } 00:14:13.175 } 00:14:13.175 ] 00:14:13.175 }, 00:14:13.175 { 00:14:13.175 "subsystem": "iscsi", 00:14:13.175 "config": [ 00:14:13.175 { 00:14:13.175 "method": "iscsi_set_options", 00:14:13.175 "params": { 00:14:13.175 "node_base": "iqn.2016-06.io.spdk", 00:14:13.175 "max_sessions": 128, 
00:14:13.175 "max_connections_per_session": 2, 00:14:13.175 "max_queue_depth": 64, 00:14:13.175 "default_time2wait": 2, 00:14:13.175 "default_time2retain": 20, 00:14:13.175 "first_burst_length": 8192, 00:14:13.175 "immediate_data": true, 00:14:13.175 "allow_duplicated_isid": false, 00:14:13.175 "error_recovery_level": 0, 00:14:13.175 "nop_timeout": 60, 00:14:13.175 "nop_in_interval": 30, 00:14:13.175 "disable_chap": false, 00:14:13.175 "require_chap": false, 00:14:13.175 "mutual_chap": false, 00:14:13.175 "chap_group": 0, 00:14:13.175 "max_large_datain_per_connection": 64, 00:14:13.175 "max_r2t_per_connection": 4, 00:14:13.175 "pdu_pool_size": 36864, 00:14:13.175 "immediate_data_pool_size": 16384, 00:14:13.175 "data_out_pool_size": 2048 00:14:13.175 } 00:14:13.175 } 00:14:13.175 ] 00:14:13.175 } 00:14:13.175 ] 00:14:13.175 }' 00:14:13.175 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.176 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:13.176 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.176 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:13.176 15:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:13.434 [2024-07-20 15:49:48.018205] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:13.435 [2024-07-20 15:49:48.018370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86189 ] 00:14:13.435 [2024-07-20 15:49:48.166629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.435 [2024-07-20 15:49:48.225313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.003 [2024-07-20 15:49:48.563372] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:14.003 [2024-07-20 15:49:48.563667] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:14.003 [2024-07-20 15:49:48.571520] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:14.003 [2024-07-20 15:49:48.571619] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:14.003 [2024-07-20 15:49:48.571631] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:14.003 [2024-07-20 15:49:48.571639] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:14.003 [2024-07-20 15:49:48.580441] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:14.003 [2024-07-20 15:49:48.580466] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:14.003 [2024-07-20 15:49:48.587385] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:14.003 [2024-07-20 15:49:48.587481] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:14.003 [2024-07-20 15:49:48.604379] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:14.262 15:49:48 ublk.test_save_ublk_config 
-- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86189 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 86189 ']' 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 86189 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86189 00:14:14.262 killing process with pid 86189 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86189' 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 86189 00:14:14.262 15:49:48 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 86189 00:14:14.521 [2024-07-20 15:49:49.155777] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:14.521 [2024-07-20 15:49:49.192389] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:14.521 [2024-07-20 15:49:49.192555] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:14.521 [2024-07-20 15:49:49.203404] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:14.521 [2024-07-20 15:49:49.203474] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:14.521 [2024-07-20 15:49:49.203483] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:14.522 [2024-07-20 15:49:49.203511] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:14.522 [2024-07-20 15:49:49.207516] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:14.780 15:49:49 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:14.780 00:14:14.780 real 0m3.375s 00:14:14.780 user 0m2.414s 00:14:14.780 sys 0m1.578s 00:14:14.780 15:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:14.780 15:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:14.780 ************************************ 00:14:14.780 END TEST test_save_ublk_config 00:14:14.780 ************************************ 00:14:14.780 15:49:49 ublk -- ublk/ublk.sh@139 -- # spdk_pid=86237 00:14:14.780 15:49:49 ublk -- 
ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:14.780 15:49:49 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.781 15:49:49 ublk -- ublk/ublk.sh@141 -- # waitforlisten 86237 00:14:14.781 15:49:49 ublk -- common/autotest_common.sh@827 -- # '[' -z 86237 ']' 00:14:14.781 15:49:49 ublk -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.781 15:49:49 ublk -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:14.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.781 15:49:49 ublk -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.781 15:49:49 ublk -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:14.781 15:49:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.040 [2024-07-20 15:49:49.627656] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:15.040 [2024-07-20 15:49:49.627778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86237 ] 00:14:15.040 [2024-07-20 15:49:49.776996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:15.040 [2024-07-20 15:49:49.818816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.040 [2024-07-20 15:49:49.818924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.607 15:49:50 ublk -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:15.607 15:49:50 ublk -- common/autotest_common.sh@860 -- # return 0 00:14:15.607 15:49:50 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:15.607 15:49:50 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:15.607 15:49:50 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:15.607 15:49:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.866 ************************************ 00:14:15.866 START TEST test_create_ublk 00:14:15.866 ************************************ 00:14:15.866 15:49:50 ublk.test_create_ublk -- common/autotest_common.sh@1121 -- # test_create_ublk 00:14:15.866 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:15.866 15:49:50 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.866 15:49:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.866 [2024-07-20 15:49:50.419381] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:15.866 [2024-07-20 15:49:50.420725] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:15.866 15:49:50 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.866 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:15.866 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:15.866 15:49:50 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.866 15:49:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.866 15:49:50 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.866 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:15.866 15:49:50 
ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:15.866 15:49:50 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.866 15:49:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.866 [2024-07-20 15:49:50.506520] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:15.866 [2024-07-20 15:49:50.506970] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:15.866 [2024-07-20 15:49:50.506993] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:15.866 [2024-07-20 15:49:50.507002] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:15.866 [2024-07-20 15:49:50.514403] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:15.866 [2024-07-20 15:49:50.514427] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:15.866 [2024-07-20 15:49:50.522402] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:15.866 [2024-07-20 15:49:50.530426] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:15.866 [2024-07-20 15:49:50.558398] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:15.866 15:49:50 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.866 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:15.866 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:15.866 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:15.866 15:49:50 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.866 15:49:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.866 15:49:50 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.866 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:15.866 { 00:14:15.866 "ublk_device": "/dev/ublkb0", 00:14:15.866 "id": 0, 00:14:15.866 "queue_depth": 512, 00:14:15.866 "num_queues": 4, 00:14:15.866 "bdev_name": "Malloc0" 00:14:15.866 } 00:14:15.866 ]' 00:14:15.866 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:15.866 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:15.866 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:16.125 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:16.125 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:16.125 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:16.125 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:16.125 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:16.125 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:16.125 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:16.125 15:49:50 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:16.125 15:49:50 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:16.125 15:49:50 ublk.test_create_ublk -- lvol/common.sh@41 -- # local 
offset=0 00:14:16.125 15:49:50 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:16.125 15:49:50 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:16.125 15:49:50 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:16.125 15:49:50 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:16.125 15:49:50 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:16.125 15:49:50 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:16.125 15:49:50 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:16.125 15:49:50 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:16.125 15:49:50 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:16.125 fio: verification read phase will never start because write phase uses all of runtime 00:14:16.125 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:16.125 fio-3.35 00:14:16.125 Starting 1 process 00:14:28.325 00:14:28.325 fio_test: (groupid=0, jobs=1): err= 0: pid=86282: Sat Jul 20 15:50:01 2024 00:14:28.325 write: IOPS=16.5k, BW=64.5MiB/s (67.7MB/s)(645MiB/10001msec); 0 zone resets 00:14:28.325 clat (usec): min=37, max=4128, avg=59.70, stdev=109.46 00:14:28.325 lat (usec): min=37, max=4155, avg=60.16, stdev=109.48 00:14:28.325 clat percentiles (usec): 00:14:28.325 | 1.00th=[ 40], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 53], 00:14:28.325 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 55], 60.00th=[ 56], 00:14:28.325 | 70.00th=[ 57], 80.00th=[ 58], 90.00th=[ 60], 95.00th=[ 63], 00:14:28.325 | 99.00th=[ 75], 99.50th=[ 81], 99.90th=[ 2311], 99.95th=[ 3064], 00:14:28.325 | 99.99th=[ 3818] 00:14:28.325 bw ( KiB/s): min=61936, max=71328, per=100.00%, avg=66116.63, stdev=2059.16, samples=19 00:14:28.325 iops : min=15484, max=17832, avg=16529.16, stdev=514.79, samples=19 00:14:28.325 lat (usec) : 50=3.87%, 100=95.87%, 250=0.04%, 500=0.01%, 750=0.01% 00:14:28.325 lat (usec) : 1000=0.01% 00:14:28.325 lat (msec) : 2=0.06%, 4=0.12%, 10=0.01% 00:14:28.325 cpu : usr=2.77%, sys=8.83%, ctx=165231, majf=0, minf=795 00:14:28.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:28.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.325 issued rwts: total=0,165226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:28.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:28.325 00:14:28.325 Run status group 0 (all jobs): 00:14:28.325 WRITE: bw=64.5MiB/s (67.7MB/s), 64.5MiB/s-64.5MiB/s (67.7MB/s-67.7MB/s), io=645MiB (677MB), run=10001-10001msec 00:14:28.325 00:14:28.325 Disk stats (read/write): 00:14:28.325 ublkb0: ios=0/163481, merge=0/0, ticks=0/8694, in_queue=8695, util=98.02% 00:14:28.325 15:50:01 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:28.325 15:50:01 ublk.test_create_ublk -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.325 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.325 [2024-07-20 15:50:01.034120] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.325 [2024-07-20 15:50:01.069428] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.325 [2024-07-20 15:50:01.070496] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.325 [2024-07-20 15:50:01.078542] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.325 [2024-07-20 15:50:01.078825] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:28.325 [2024-07-20 15:50:01.078840] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:28.325 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.325 15:50:01 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:28.325 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:14:28.325 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:28.325 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:14:28.325 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.325 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:14:28.325 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.325 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:14:28.325 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.325 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.325 [2024-07-20 15:50:01.101463] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:28.325 request: 00:14:28.325 { 00:14:28.325 "ublk_id": 0, 00:14:28.325 "method": "ublk_stop_disk", 00:14:28.325 "req_id": 1 00:14:28.325 } 00:14:28.325 Got JSON-RPC error response 00:14:28.325 response: 00:14:28.325 { 00:14:28.325 "code": -19, 00:14:28.326 "message": "No such device" 00:14:28.326 } 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:28.326 15:50:01 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.326 [2024-07-20 15:50:01.125489] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:28.326 [2024-07-20 15:50:01.127511] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:28.326 [2024-07-20 15:50:01.127551] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.326 15:50:01 
ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.326 15:50:01 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:28.326 15:50:01 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.326 15:50:01 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:28.326 15:50:01 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:28.326 15:50:01 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:28.326 15:50:01 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.326 15:50:01 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:28.326 15:50:01 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:28.326 15:50:01 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:28.326 00:14:28.326 real 0m10.904s 00:14:28.326 user 0m0.651s 00:14:28.326 sys 0m0.998s 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:28.326 ************************************ 00:14:28.326 END TEST test_create_ublk 00:14:28.326 ************************************ 00:14:28.326 15:50:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.326 15:50:01 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:28.326 15:50:01 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:28.326 15:50:01 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:28.326 15:50:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.326 ************************************ 00:14:28.326 START TEST test_create_multi_ublk 00:14:28.326 ************************************ 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@1121 -- # test_create_multi_ublk 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.326 [2024-07-20 15:50:01.388378] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:28.326 [2024-07-20 15:50:01.389335] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:28.326 15:50:01 
ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.326 [2024-07-20 15:50:01.476556] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:28.326 [2024-07-20 15:50:01.477002] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:28.326 [2024-07-20 15:50:01.477020] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:28.326 [2024-07-20 15:50:01.477031] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:28.326 [2024-07-20 15:50:01.485637] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:28.326 [2024-07-20 15:50:01.485665] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:28.326 [2024-07-20 15:50:01.492381] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:28.326 [2024-07-20 15:50:01.492930] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:28.326 [2024-07-20 15:50:01.501835] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.326 [2024-07-20 15:50:01.590518] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:28.326 [2024-07-20 15:50:01.590954] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:28.326 [2024-07-20 15:50:01.590974] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:28.326 [2024-07-20 15:50:01.590984] 
ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:28.326 [2024-07-20 15:50:01.603377] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:28.326 [2024-07-20 15:50:01.603401] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:28.326 [2024-07-20 15:50:01.617378] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:28.326 [2024-07-20 15:50:01.617951] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:28.326 [2024-07-20 15:50:01.629428] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.326 [2024-07-20 15:50:01.721536] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:28.326 [2024-07-20 15:50:01.722032] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:28.326 [2024-07-20 15:50:01.722049] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:28.326 [2024-07-20 15:50:01.722060] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:28.326 [2024-07-20 15:50:01.729398] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:28.326 [2024-07-20 15:50:01.729426] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:28.326 [2024-07-20 15:50:01.737383] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:28.326 [2024-07-20 15:50:01.737941] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:28.326 [2024-07-20 15:50:01.740803] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
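The per-disk bring-up repeated above is the same three-step RPC pattern each time: create a malloc bdev, then expose it through the ublk target, which the log shows as the kernel handshake ADD_DEV -> SET_PARAMS -> START_DEV. As a standalone sketch against an already running target, using only the calls visible in this log and assuming the default RPC socket:

    # one-time: create the ublk target
    scripts/rpc.py ublk_create_target
    for i in 0 1 2 3; do
        # 128 MiB malloc bdev with 4 KiB blocks, then /dev/ublkb$i with 4 queues, depth 512
        scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
    done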
00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.326 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.326 [2024-07-20 15:50:01.818499] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:28.326 [2024-07-20 15:50:01.818944] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:28.326 [2024-07-20 15:50:01.818964] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:28.326 [2024-07-20 15:50:01.818973] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:28.326 [2024-07-20 15:50:01.826407] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:28.326 [2024-07-20 15:50:01.826429] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:28.327 [2024-07-20 15:50:01.834411] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:28.327 [2024-07-20 15:50:01.834982] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:28.327 [2024-07-20 15:50:01.839812] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:28.327 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.327 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:28.327 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:28.327 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.327 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.327 15:50:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.327 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:28.327 { 00:14:28.327 "ublk_device": "/dev/ublkb0", 00:14:28.327 "id": 0, 00:14:28.327 "queue_depth": 512, 00:14:28.327 "num_queues": 4, 00:14:28.327 "bdev_name": "Malloc0" 00:14:28.327 }, 00:14:28.327 { 00:14:28.327 "ublk_device": "/dev/ublkb1", 00:14:28.327 "id": 1, 00:14:28.327 "queue_depth": 512, 00:14:28.327 "num_queues": 4, 00:14:28.327 "bdev_name": "Malloc1" 00:14:28.327 }, 00:14:28.327 { 00:14:28.327 "ublk_device": "/dev/ublkb2", 00:14:28.327 "id": 2, 00:14:28.327 "queue_depth": 512, 00:14:28.327 "num_queues": 4, 00:14:28.327 "bdev_name": "Malloc2" 00:14:28.327 }, 00:14:28.327 { 00:14:28.327 "ublk_device": "/dev/ublkb3", 00:14:28.327 "id": 3, 00:14:28.327 "queue_depth": 512, 00:14:28.327 "num_queues": 4, 00:14:28.327 "bdev_name": "Malloc3" 00:14:28.327 } 00:14:28.327 ]' 00:14:28.327 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:28.327 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.327 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:28.327 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:28.327 15:50:01 ublk.test_create_multi_ublk -- 
ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:28.327 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:28.327 15:50:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 
00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.327 [2024-07-20 15:50:02.724489] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.327 [2024-07-20 15:50:02.762761] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.327 [2024-07-20 15:50:02.764236] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.327 [2024-07-20 15:50:02.770394] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.327 [2024-07-20 15:50:02.770673] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:28.327 [2024-07-20 15:50:02.770686] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.327 [2024-07-20 15:50:02.786462] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.327 [2024-07-20 15:50:02.818752] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.327 [2024-07-20 15:50:02.820175] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.327 [2024-07-20 15:50:02.826391] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.327 [2024-07-20 15:50:02.826655] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:28.327 [2024-07-20 15:50:02.826668] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.327 [2024-07-20 15:50:02.841483] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.327 [2024-07-20 15:50:02.876820] ublk.c: 
328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.327 [2024-07-20 15:50:02.879712] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.327 [2024-07-20 15:50:02.884406] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.327 [2024-07-20 15:50:02.884659] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:28.327 [2024-07-20 15:50:02.884675] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.327 [2024-07-20 15:50:02.892512] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.327 [2024-07-20 15:50:02.931809] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.327 [2024-07-20 15:50:02.936612] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.327 [2024-07-20 15:50:02.944406] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.327 [2024-07-20 15:50:02.944653] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:28.327 [2024-07-20 15:50:02.944666] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.327 15:50:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:28.586 [2024-07-20 15:50:03.126455] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:28.586 [2024-07-20 15:50:03.127792] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:28.586 [2024-07-20 15:50:03.127832] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:28.586 15:50:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:28.586 15:50:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.586 15:50:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:28.586 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.586 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.586 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.586 15:50:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.586 15:50:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:28.586 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.586 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.586 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- 
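Teardown mirrors the bring-up: each disk is stopped first (STOP_DEV followed by DEL_DEV in the kernel handshake), then the target is destroyed and the backing bdevs are deleted. A minimal sketch under the same assumptions as above:

    for i in 0 1 2 3; do
        scripts/rpc.py ublk_stop_disk "$i"         # /dev/ublkb$i disappears here
    done
    scripts/rpc.py -t 120 ublk_destroy_target      # generous timeout; shutdown completes asynchronously
    for i in 0 1 2 3; do
        scripts/rpc.py bdev_malloc_delete "Malloc$i"
    done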
ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:28.587 15:50:03 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:28.845 15:50:03 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:28.845 15:50:03 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:28.845 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.845 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.845 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.845 15:50:03 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:28.845 15:50:03 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:28.845 ************************************ 00:14:28.845 END TEST test_create_multi_ublk 00:14:28.845 ************************************ 00:14:28.845 15:50:03 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:28.845 00:14:28.845 real 0m2.083s 00:14:28.845 user 0m0.964s 00:14:28.845 sys 0m0.216s 00:14:28.845 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:28.845 15:50:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.845 15:50:03 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:28.845 15:50:03 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:28.845 15:50:03 ublk -- ublk/ublk.sh@130 -- # killprocess 86237 00:14:28.845 15:50:03 ublk -- common/autotest_common.sh@946 -- # '[' -z 86237 ']' 00:14:28.845 15:50:03 ublk -- common/autotest_common.sh@950 -- # kill -0 86237 00:14:28.845 15:50:03 ublk -- common/autotest_common.sh@951 -- # uname 00:14:28.845 15:50:03 ublk -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:28.845 15:50:03 ublk -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86237 00:14:28.845 killing process with pid 86237 00:14:28.845 15:50:03 ublk -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:28.845 15:50:03 ublk -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:28.845 15:50:03 ublk -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86237' 00:14:28.845 15:50:03 ublk -- common/autotest_common.sh@965 -- # kill 86237 00:14:28.845 15:50:03 ublk -- common/autotest_common.sh@970 -- # wait 86237 00:14:29.104 [2024-07-20 15:50:03.696044] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:29.104 [2024-07-20 15:50:03.696105] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:29.363 00:14:29.363 real 0m18.004s 00:14:29.363 user 0m28.603s 00:14:29.363 sys 0m7.105s 00:14:29.363 15:50:03 ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:29.363 ************************************ 00:14:29.363 END TEST ublk 00:14:29.363 ************************************ 00:14:29.363 15:50:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:29.363 15:50:03 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:29.363 15:50:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:29.363 15:50:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:29.363 15:50:03 -- common/autotest_common.sh@10 -- # set +x 00:14:29.363 ************************************ 00:14:29.363 START TEST ublk_recovery 00:14:29.363 ************************************ 00:14:29.363 15:50:04 ublk_recovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:29.363 * Looking for test storage... 00:14:29.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:29.363 15:50:04 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:29.363 15:50:04 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:29.363 15:50:04 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:29.363 15:50:04 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:29.363 15:50:04 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:29.363 15:50:04 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:29.363 15:50:04 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:29.363 15:50:04 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:29.363 15:50:04 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:29.363 15:50:04 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:29.363 15:50:04 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=86577 00:14:29.363 15:50:04 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:29.363 15:50:04 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:29.363 15:50:04 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 86577 00:14:29.363 15:50:04 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 86577 ']' 00:14:29.363 15:50:04 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.363 15:50:04 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:29.363 15:50:04 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:29.363 15:50:04 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:29.363 15:50:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.622 [2024-07-20 15:50:04.239611] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:29.622 [2024-07-20 15:50:04.239733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86577 ] 00:14:29.622 [2024-07-20 15:50:04.391095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:29.880 [2024-07-20 15:50:04.435321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.880 [2024-07-20 15:50:04.435486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.459 15:50:05 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:30.459 15:50:05 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:14:30.459 15:50:05 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:30.459 15:50:05 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.459 15:50:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.459 [2024-07-20 15:50:05.020388] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:30.459 [2024-07-20 15:50:05.021736] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:30.459 15:50:05 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.459 15:50:05 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:30.459 15:50:05 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.459 15:50:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.459 malloc0 00:14:30.459 15:50:05 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.459 15:50:05 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:30.459 15:50:05 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.459 15:50:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.459 [2024-07-20 15:50:05.068527] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:14:30.459 [2024-07-20 15:50:05.068654] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:30.459 [2024-07-20 15:50:05.068668] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:30.459 [2024-07-20 15:50:05.068676] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:30.459 [2024-07-20 15:50:05.077471] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:30.459 [2024-07-20 15:50:05.077493] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:30.459 [2024-07-20 15:50:05.084398] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:30.459 [2024-07-20 15:50:05.084544] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:30.459 [2024-07-20 15:50:05.094393] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:30.459 1 00:14:30.459 15:50:05 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.459 
15:50:05 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:31.392 15:50:06 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=86610 00:14:31.392 15:50:06 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:31.392 15:50:06 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:31.650 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:31.650 fio-3.35 00:14:31.650 Starting 1 process 00:14:36.915 15:50:11 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 86577 00:14:36.915 15:50:11 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:42.233 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 86577 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:42.233 15:50:16 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=86721 00:14:42.233 15:50:16 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:42.233 15:50:16 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.233 15:50:16 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 86721 00:14:42.233 15:50:16 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 86721 ']' 00:14:42.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.233 15:50:16 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.233 15:50:16 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:42.233 15:50:16 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.233 15:50:16 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:42.233 15:50:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.233 [2024-07-20 15:50:16.213845] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:14:42.233 [2024-07-20 15:50:16.213967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86721 ] 00:14:42.233 [2024-07-20 15:50:16.363443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:42.233 [2024-07-20 15:50:16.405777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.233 [2024-07-20 15:50:16.405901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.233 15:50:16 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:42.233 15:50:16 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:14:42.233 15:50:16 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:42.233 15:50:16 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.233 15:50:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.233 [2024-07-20 15:50:16.990379] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:42.233 [2024-07-20 15:50:16.991718] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:42.234 15:50:16 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.234 15:50:16 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:42.234 15:50:16 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.234 15:50:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.492 malloc0 00:14:42.492 15:50:17 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.492 15:50:17 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:42.492 15:50:17 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.492 15:50:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.492 [2024-07-20 15:50:17.034771] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:42.492 [2024-07-20 15:50:17.034818] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:42.492 [2024-07-20 15:50:17.034830] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:42.492 [2024-07-20 15:50:17.042427] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:42.492 [2024-07-20 15:50:17.042458] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:42.492 [2024-07-20 15:50:17.042536] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:42.492 1 00:14:42.492 15:50:17 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.492 15:50:17 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 86610 00:14:42.492 [2024-07-20 15:50:17.050407] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:42.492 [2024-07-20 15:50:17.054022] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:42.492 [2024-07-20 15:50:17.057586] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:42.492 [2024-07-20 15:50:17.057607] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:38.716 00:15:38.716 fio_test: (groupid=0, 
jobs=1): err= 0: pid=86613: Sat Jul 20 15:51:06 2024 00:15:38.716 read: IOPS=24.0k, BW=93.6MiB/s (98.2MB/s)(5618MiB/60002msec) 00:15:38.716 slat (nsec): min=1849, max=306166, avg=6723.35, stdev=1847.70 00:15:38.716 clat (usec): min=1025, max=5953.9k, avg=2599.83, stdev=37789.00 00:15:38.716 lat (usec): min=1029, max=5954.0k, avg=2606.56, stdev=37789.00 00:15:38.716 clat percentiles (usec): 00:15:38.716 | 1.00th=[ 1876], 5.00th=[ 2057], 10.00th=[ 2114], 20.00th=[ 2147], 00:15:38.716 | 30.00th=[ 2180], 40.00th=[ 2212], 50.00th=[ 2212], 60.00th=[ 2245], 00:15:38.716 | 70.00th=[ 2278], 80.00th=[ 2311], 90.00th=[ 2671], 95.00th=[ 3556], 00:15:38.716 | 99.00th=[ 4817], 99.50th=[ 5276], 99.90th=[ 6456], 99.95th=[ 7373], 00:15:38.716 | 99.99th=[12518] 00:15:38.716 bw ( KiB/s): min=29544, max=109592, per=100.00%, avg=105657.60, stdev=10073.76, samples=108 00:15:38.716 iops : min= 7386, max=27398, avg=26414.38, stdev=2518.44, samples=108 00:15:38.716 write: IOPS=23.9k, BW=93.5MiB/s (98.1MB/s)(5612MiB/60002msec); 0 zone resets 00:15:38.716 slat (nsec): min=1826, max=360239, avg=6752.17, stdev=1878.23 00:15:38.716 clat (usec): min=993, max=5954.3k, avg=2728.13, stdev=41536.29 00:15:38.716 lat (usec): min=997, max=5954.3k, avg=2734.88, stdev=41536.29 00:15:38.716 clat percentiles (usec): 00:15:38.716 | 1.00th=[ 1876], 5.00th=[ 2024], 10.00th=[ 2180], 20.00th=[ 2245], 00:15:38.716 | 30.00th=[ 2278], 40.00th=[ 2311], 50.00th=[ 2311], 60.00th=[ 2343], 00:15:38.716 | 70.00th=[ 2376], 80.00th=[ 2409], 90.00th=[ 2638], 95.00th=[ 3589], 00:15:38.716 | 99.00th=[ 4817], 99.50th=[ 5276], 99.90th=[ 6587], 99.95th=[ 7504], 00:15:38.716 | 99.99th=[12649] 00:15:38.716 bw ( KiB/s): min=30408, max=109488, per=100.00%, avg=105531.07, stdev=9922.39, samples=108 00:15:38.716 iops : min= 7602, max=27372, avg=26382.75, stdev=2480.59, samples=108 00:15:38.716 lat (usec) : 1000=0.01% 00:15:38.716 lat (msec) : 2=3.72%, 4=93.25%, 10=3.02%, 20=0.01%, >=2000=0.01% 00:15:38.716 cpu : usr=11.77%, sys=31.67%, ctx=122679, majf=0, minf=13 00:15:38.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:38.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:38.716 issued rwts: total=1438282,1436799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:38.716 00:15:38.716 Run status group 0 (all jobs): 00:15:38.716 READ: bw=93.6MiB/s (98.2MB/s), 93.6MiB/s-93.6MiB/s (98.2MB/s-98.2MB/s), io=5618MiB (5891MB), run=60002-60002msec 00:15:38.716 WRITE: bw=93.5MiB/s (98.1MB/s), 93.5MiB/s-93.5MiB/s (98.1MB/s-98.1MB/s), io=5612MiB (5885MB), run=60002-60002msec 00:15:38.716 00:15:38.716 Disk stats (read/write): 00:15:38.716 ublkb1: ios=1435342/1433768, merge=0/0, ticks=3631049/3674653, in_queue=7305702, util=99.93% 00:15:38.716 15:51:06 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:38.716 15:51:06 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.716 15:51:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.716 [2024-07-20 15:51:06.370344] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:38.717 [2024-07-20 15:51:06.415481] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:38.717 [2024-07-20 15:51:06.415713] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:38.717 
[2024-07-20 15:51:06.423452] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:38.717 [2024-07-20 15:51:06.423566] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:38.717 [2024-07-20 15:51:06.423576] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.717 15:51:06 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.717 [2024-07-20 15:51:06.439487] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:38.717 [2024-07-20 15:51:06.441321] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:38.717 [2024-07-20 15:51:06.441376] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.717 15:51:06 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:38.717 15:51:06 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:38.717 15:51:06 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 86721 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@946 -- # '[' -z 86721 ']' 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@950 -- # kill -0 86721 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@951 -- # uname 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86721 00:15:38.717 killing process with pid 86721 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86721' 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@965 -- # kill 86721 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@970 -- # wait 86721 00:15:38.717 [2024-07-20 15:51:06.627592] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:38.717 [2024-07-20 15:51:06.627683] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:38.717 00:15:38.717 real 1m2.875s 00:15:38.717 user 1m44.077s 00:15:38.717 sys 0m37.160s 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:38.717 15:51:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.717 ************************************ 00:15:38.717 END TEST ublk_recovery 00:15:38.717 ************************************ 00:15:38.717 15:51:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:15:38.717 15:51:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:15:38.717 15:51:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:38.717 15:51:06 -- common/autotest_common.sh@10 -- # set +x 00:15:38.717 15:51:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:15:38.717 15:51:07 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:15:38.717 15:51:07 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:15:38.717 15:51:07 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:15:38.717 15:51:07 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:15:38.717 15:51:07 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 
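[reviewer annotation] For readers following the trace, the recovery scenario that just completed above reduces to the sequence below. This is a minimal sketch distilled from the rpc_cmd calls visible in the log (the malloc bdev name, ublk ID 1, -q 2/-d 128 queue parameters, fio flags, and spdk_tgt invocation are copied from the trace); it omits the waitforlisten/sleep synchronization and cleanup traps the real ublk_recovery.sh performs, so treat it as an outline, not the script itself:

    # bring up the target and export a malloc bdev as /dev/ublkb1
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128

    # keep I/O in flight against the ublk device
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &

    # hard-kill the target mid-I/O, then restart it
    kill -9 "$spdk_pid"
    build/bin/spdk_tgt -m 0x3 -L ublk &

    # recreate the backing bdev and re-attach the surviving /dev/ublkb1
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1

    # fio rides out the outage; per the stats above it exits with err=0
    wait

As the fio summary above shows (err= 0, ublkb1 util=99.93%), the 60-second job survived the kill/recover cycle with only the latency bump visible in the max clat figures.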
00:15:38.717 15:51:07 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:15:38.717 15:51:07 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:15:38.717 15:51:07 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:15:38.717 15:51:07 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:15:38.717 15:51:07 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:38.717 15:51:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:38.717 15:51:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:38.717 15:51:07 -- common/autotest_common.sh@10 -- # set +x 00:15:38.717 ************************************ 00:15:38.717 START TEST ftl 00:15:38.717 ************************************ 00:15:38.717 15:51:07 ftl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:38.717 * Looking for test storage... 00:15:38.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:38.717 15:51:07 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:38.717 15:51:07 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:38.717 15:51:07 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:38.717 15:51:07 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:38.717 15:51:07 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:38.717 15:51:07 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:38.717 15:51:07 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:38.717 15:51:07 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:38.717 15:51:07 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:38.717 15:51:07 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.717 15:51:07 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.717 15:51:07 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:38.717 15:51:07 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:38.717 15:51:07 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:38.717 15:51:07 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:38.717 15:51:07 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:38.717 15:51:07 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:38.717 15:51:07 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.717 15:51:07 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.717 15:51:07 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:38.717 15:51:07 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:38.717 15:51:07 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:38.717 15:51:07 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:38.717 15:51:07 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:38.717 15:51:07 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:38.717 15:51:07 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:38.717 15:51:07 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:38.717 15:51:07 ftl -- 
ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:38.717 15:51:07 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:38.717 15:51:07 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:38.717 15:51:07 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:38.717 15:51:07 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:38.717 15:51:07 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:38.717 15:51:07 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:38.717 15:51:07 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:38.717 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:38.717 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:38.717 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:38.717 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:38.717 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:38.717 15:51:07 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=87506 00:15:38.717 15:51:07 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:38.717 15:51:07 ftl -- ftl/ftl.sh@38 -- # waitforlisten 87506 00:15:38.717 15:51:07 ftl -- common/autotest_common.sh@827 -- # '[' -z 87506 ']' 00:15:38.717 15:51:07 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.717 15:51:07 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:38.717 15:51:07 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.717 15:51:07 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:38.717 15:51:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:38.717 [2024-07-20 15:51:08.043006] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:15:38.717 [2024-07-20 15:51:08.043137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87506 ] 00:15:38.717 [2024-07-20 15:51:08.191748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.717 [2024-07-20 15:51:08.232071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.717 15:51:08 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:38.717 15:51:08 ftl -- common/autotest_common.sh@860 -- # return 0 00:15:38.717 15:51:08 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:38.717 15:51:08 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:38.717 15:51:09 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:38.717 15:51:09 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:38.717 15:51:09 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:38.717 15:51:09 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:38.717 15:51:09 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:38.717 15:51:09 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:38.717 15:51:09 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:38.717 15:51:09 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:38.717 15:51:09 ftl -- ftl/ftl.sh@50 -- # break 00:15:38.717 15:51:09 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:38.717 15:51:09 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:38.717 15:51:09 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:38.717 15:51:09 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:38.717 15:51:10 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:38.717 15:51:10 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:38.717 15:51:10 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:38.717 15:51:10 ftl -- ftl/ftl.sh@63 -- # break 00:15:38.717 15:51:10 ftl -- ftl/ftl.sh@66 -- # killprocess 87506 00:15:38.717 15:51:10 ftl -- common/autotest_common.sh@946 -- # '[' -z 87506 ']' 00:15:38.717 15:51:10 ftl -- common/autotest_common.sh@950 -- # kill -0 87506 00:15:38.717 15:51:10 ftl -- common/autotest_common.sh@951 -- # uname 00:15:38.717 15:51:10 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:38.717 15:51:10 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87506 00:15:38.717 15:51:10 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:38.717 killing process with pid 87506 00:15:38.717 15:51:10 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:38.717 15:51:10 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87506' 00:15:38.717 15:51:10 ftl -- common/autotest_common.sh@965 -- # kill 87506 00:15:38.717 15:51:10 ftl -- common/autotest_common.sh@970 -- # wait 87506 00:15:38.717 15:51:10 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:38.717 15:51:10 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:38.717 15:51:10 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:38.717 15:51:10 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:38.717 15:51:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:38.717 ************************************ 00:15:38.717 START TEST ftl_fio_basic 00:15:38.717 ************************************ 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:38.717 * Looking for test storage... 00:15:38.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:38.717 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=87608 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 87608 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- common/autotest_common.sh@827 -- # '[' -z 87608 ']' 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:38.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:38.718 15:51:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:38.718 [2024-07-20 15:51:10.856880] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:15:38.718 [2024-07-20 15:51:10.857075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87608 ] 00:15:38.718 [2024-07-20 15:51:11.017116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:38.718 [2024-07-20 15:51:11.060667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.718 [2024-07-20 15:51:11.060904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.718 [2024-07-20 15:51:11.060937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # return 0 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:38.718 15:51:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:38.718 { 00:15:38.718 "name": "nvme0n1", 00:15:38.718 "aliases": [ 00:15:38.718 "088ff5f0-675b-4351-92fd-016c12687c89" 00:15:38.718 ], 00:15:38.718 "product_name": "NVMe disk", 00:15:38.718 "block_size": 4096, 00:15:38.718 "num_blocks": 1310720, 00:15:38.718 "uuid": "088ff5f0-675b-4351-92fd-016c12687c89", 00:15:38.718 "assigned_rate_limits": { 00:15:38.718 "rw_ios_per_sec": 0, 00:15:38.718 "rw_mbytes_per_sec": 0, 00:15:38.718 "r_mbytes_per_sec": 0, 00:15:38.718 "w_mbytes_per_sec": 0 00:15:38.718 }, 00:15:38.718 "claimed": false, 00:15:38.718 "zoned": false, 00:15:38.718 "supported_io_types": { 00:15:38.718 "read": true, 00:15:38.718 "write": true, 00:15:38.718 "unmap": true, 00:15:38.718 "write_zeroes": true, 00:15:38.718 "flush": true, 00:15:38.718 "reset": true, 00:15:38.718 "compare": true, 00:15:38.718 "compare_and_write": false, 00:15:38.718 "abort": true, 00:15:38.718 "nvme_admin": true, 00:15:38.718 "nvme_io": true 00:15:38.718 }, 00:15:38.718 "driver_specific": { 00:15:38.718 "nvme": [ 00:15:38.718 { 00:15:38.718 "pci_address": "0000:00:11.0", 00:15:38.718 "trid": { 00:15:38.718 "trtype": "PCIe", 00:15:38.718 "traddr": "0000:00:11.0" 00:15:38.718 }, 
00:15:38.718 "ctrlr_data": { 00:15:38.718 "cntlid": 0, 00:15:38.718 "vendor_id": "0x1b36", 00:15:38.718 "model_number": "QEMU NVMe Ctrl", 00:15:38.718 "serial_number": "12341", 00:15:38.718 "firmware_revision": "8.0.0", 00:15:38.718 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:38.718 "oacs": { 00:15:38.718 "security": 0, 00:15:38.718 "format": 1, 00:15:38.718 "firmware": 0, 00:15:38.718 "ns_manage": 1 00:15:38.718 }, 00:15:38.718 "multi_ctrlr": false, 00:15:38.718 "ana_reporting": false 00:15:38.718 }, 00:15:38.718 "vs": { 00:15:38.718 "nvme_version": "1.4" 00:15:38.718 }, 00:15:38.718 "ns_data": { 00:15:38.718 "id": 1, 00:15:38.718 "can_share": false 00:15:38.718 } 00:15:38.718 } 00:15:38.718 ], 00:15:38.718 "mp_policy": "active_passive" 00:15:38.718 } 00:15:38.718 } 00:15:38.718 ]' 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=1310720 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 5120 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=a690e30b-dd9d-4689-8630-375f2801af68 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a690e30b-dd9d-4689-8630-375f2801af68 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=26e9cbe7-a489-4b7f-bd27-3ce42ce97889 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 26e9cbe7-a489-4b7f-bd27-3ce42ce97889 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=26e9cbe7-a489-4b7f-bd27-3ce42ce97889 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 26e9cbe7-a489-4b7f-bd27-3ce42ce97889 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=26e9cbe7-a489-4b7f-bd27-3ce42ce97889 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 26e9cbe7-a489-4b7f-bd27-3ce42ce97889 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:38.718 { 00:15:38.718 "name": "26e9cbe7-a489-4b7f-bd27-3ce42ce97889", 00:15:38.718 "aliases": [ 00:15:38.718 "lvs/nvme0n1p0" 00:15:38.718 ], 00:15:38.718 "product_name": "Logical Volume", 00:15:38.718 "block_size": 4096, 00:15:38.718 "num_blocks": 26476544, 00:15:38.718 "uuid": "26e9cbe7-a489-4b7f-bd27-3ce42ce97889", 00:15:38.718 "assigned_rate_limits": { 00:15:38.718 "rw_ios_per_sec": 0, 00:15:38.718 "rw_mbytes_per_sec": 0, 00:15:38.718 "r_mbytes_per_sec": 0, 00:15:38.718 "w_mbytes_per_sec": 0 00:15:38.718 }, 00:15:38.718 "claimed": false, 00:15:38.718 "zoned": false, 00:15:38.718 "supported_io_types": { 00:15:38.718 "read": true, 00:15:38.718 "write": true, 00:15:38.718 "unmap": true, 00:15:38.718 "write_zeroes": true, 00:15:38.718 "flush": false, 00:15:38.718 "reset": true, 00:15:38.718 "compare": false, 00:15:38.718 "compare_and_write": false, 00:15:38.718 "abort": false, 00:15:38.718 "nvme_admin": false, 00:15:38.718 "nvme_io": false 00:15:38.718 }, 00:15:38.718 "driver_specific": { 00:15:38.718 "lvol": { 00:15:38.718 "lvol_store_uuid": "a690e30b-dd9d-4689-8630-375f2801af68", 00:15:38.718 "base_bdev": "nvme0n1", 00:15:38.718 "thin_provision": true, 00:15:38.718 "num_allocated_clusters": 0, 00:15:38.718 "snapshot": false, 00:15:38.718 "clone": false, 00:15:38.718 "esnap_clone": false 00:15:38.718 } 00:15:38.718 } 00:15:38.718 } 00:15:38.718 ]' 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:38.718 15:51:12 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 26e9cbe7-a489-4b7f-bd27-3ce42ce97889 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=26e9cbe7-a489-4b7f-bd27-3ce42ce97889 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 26e9cbe7-a489-4b7f-bd27-3ce42ce97889 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:38.718 { 00:15:38.718 "name": "26e9cbe7-a489-4b7f-bd27-3ce42ce97889", 00:15:38.718 "aliases": [ 00:15:38.718 "lvs/nvme0n1p0" 
00:15:38.718 ], 00:15:38.718 "product_name": "Logical Volume", 00:15:38.718 "block_size": 4096, 00:15:38.718 "num_blocks": 26476544, 00:15:38.718 "uuid": "26e9cbe7-a489-4b7f-bd27-3ce42ce97889", 00:15:38.718 "assigned_rate_limits": { 00:15:38.718 "rw_ios_per_sec": 0, 00:15:38.718 "rw_mbytes_per_sec": 0, 00:15:38.718 "r_mbytes_per_sec": 0, 00:15:38.718 "w_mbytes_per_sec": 0 00:15:38.718 }, 00:15:38.718 "claimed": false, 00:15:38.718 "zoned": false, 00:15:38.718 "supported_io_types": { 00:15:38.718 "read": true, 00:15:38.718 "write": true, 00:15:38.718 "unmap": true, 00:15:38.718 "write_zeroes": true, 00:15:38.718 "flush": false, 00:15:38.718 "reset": true, 00:15:38.718 "compare": false, 00:15:38.718 "compare_and_write": false, 00:15:38.718 "abort": false, 00:15:38.718 "nvme_admin": false, 00:15:38.718 "nvme_io": false 00:15:38.718 }, 00:15:38.718 "driver_specific": { 00:15:38.718 "lvol": { 00:15:38.718 "lvol_store_uuid": "a690e30b-dd9d-4689-8630-375f2801af68", 00:15:38.718 "base_bdev": "nvme0n1", 00:15:38.718 "thin_provision": true, 00:15:38.718 "num_allocated_clusters": 0, 00:15:38.718 "snapshot": false, 00:15:38.718 "clone": false, 00:15:38.718 "esnap_clone": false 00:15:38.718 } 00:15:38.718 } 00:15:38.718 } 00:15:38.718 ]' 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:38.718 15:51:13 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:38.719 15:51:13 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:38.977 15:51:13 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:38.977 15:51:13 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:38.977 15:51:13 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:38.977 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:38.977 15:51:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 26e9cbe7-a489-4b7f-bd27-3ce42ce97889 00:15:38.977 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=26e9cbe7-a489-4b7f-bd27-3ce42ce97889 00:15:38.977 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:38.977 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:38.977 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:38.977 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 26e9cbe7-a489-4b7f-bd27-3ce42ce97889 00:15:39.235 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:39.235 { 00:15:39.235 "name": "26e9cbe7-a489-4b7f-bd27-3ce42ce97889", 00:15:39.235 "aliases": [ 00:15:39.235 "lvs/nvme0n1p0" 00:15:39.235 ], 00:15:39.235 "product_name": "Logical Volume", 00:15:39.235 "block_size": 4096, 00:15:39.235 "num_blocks": 26476544, 00:15:39.235 "uuid": "26e9cbe7-a489-4b7f-bd27-3ce42ce97889", 00:15:39.235 "assigned_rate_limits": { 00:15:39.235 "rw_ios_per_sec": 0, 
00:15:39.235 "rw_mbytes_per_sec": 0, 00:15:39.235 "r_mbytes_per_sec": 0, 00:15:39.235 "w_mbytes_per_sec": 0 00:15:39.235 }, 00:15:39.235 "claimed": false, 00:15:39.235 "zoned": false, 00:15:39.235 "supported_io_types": { 00:15:39.235 "read": true, 00:15:39.235 "write": true, 00:15:39.235 "unmap": true, 00:15:39.235 "write_zeroes": true, 00:15:39.235 "flush": false, 00:15:39.235 "reset": true, 00:15:39.235 "compare": false, 00:15:39.235 "compare_and_write": false, 00:15:39.235 "abort": false, 00:15:39.235 "nvme_admin": false, 00:15:39.235 "nvme_io": false 00:15:39.235 }, 00:15:39.235 "driver_specific": { 00:15:39.235 "lvol": { 00:15:39.235 "lvol_store_uuid": "a690e30b-dd9d-4689-8630-375f2801af68", 00:15:39.235 "base_bdev": "nvme0n1", 00:15:39.235 "thin_provision": true, 00:15:39.235 "num_allocated_clusters": 0, 00:15:39.235 "snapshot": false, 00:15:39.235 "clone": false, 00:15:39.235 "esnap_clone": false 00:15:39.235 } 00:15:39.235 } 00:15:39.235 } 00:15:39.235 ]' 00:15:39.235 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:39.235 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:39.235 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:39.235 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:39.235 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:39.235 15:51:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:39.235 15:51:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:39.235 15:51:13 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:39.235 15:51:13 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 26e9cbe7-a489-4b7f-bd27-3ce42ce97889 -c nvc0n1p0 --l2p_dram_limit 60 00:15:39.494 [2024-07-20 15:51:14.065998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.494 [2024-07-20 15:51:14.066050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:39.494 [2024-07-20 15:51:14.066069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:39.494 [2024-07-20 15:51:14.066079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.494 [2024-07-20 15:51:14.066152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.494 [2024-07-20 15:51:14.066167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:39.494 [2024-07-20 15:51:14.066179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:15:39.494 [2024-07-20 15:51:14.066190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.494 [2024-07-20 15:51:14.066237] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:39.494 [2024-07-20 15:51:14.066543] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:39.494 [2024-07-20 15:51:14.066568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.494 [2024-07-20 15:51:14.066579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:39.494 [2024-07-20 15:51:14.066605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:15:39.494 [2024-07-20 15:51:14.066626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:15:39.494 [2024-07-20 15:51:14.066710] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0caac309-7e72-4848-ba02-ac86f13fd709 00:15:39.494 [2024-07-20 15:51:14.068145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.494 [2024-07-20 15:51:14.068178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:39.494 [2024-07-20 15:51:14.068190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:15:39.494 [2024-07-20 15:51:14.068204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.494 [2024-07-20 15:51:14.075712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.494 [2024-07-20 15:51:14.075746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:39.494 [2024-07-20 15:51:14.075772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.429 ms 00:15:39.494 [2024-07-20 15:51:14.075788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.494 [2024-07-20 15:51:14.075896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.494 [2024-07-20 15:51:14.075913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:39.494 [2024-07-20 15:51:14.075936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:15:39.494 [2024-07-20 15:51:14.075959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.494 [2024-07-20 15:51:14.076027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.494 [2024-07-20 15:51:14.076042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:39.494 [2024-07-20 15:51:14.076053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:39.494 [2024-07-20 15:51:14.076064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.494 [2024-07-20 15:51:14.076122] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:39.494 [2024-07-20 15:51:14.077929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.494 [2024-07-20 15:51:14.077957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:39.494 [2024-07-20 15:51:14.077971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.819 ms 00:15:39.494 [2024-07-20 15:51:14.077981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.494 [2024-07-20 15:51:14.078029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.494 [2024-07-20 15:51:14.078039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:39.494 [2024-07-20 15:51:14.078053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:39.494 [2024-07-20 15:51:14.078063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.494 [2024-07-20 15:51:14.078102] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:39.494 [2024-07-20 15:51:14.078256] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:39.494 [2024-07-20 15:51:14.078305] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:39.494 [2024-07-20 15:51:14.078322] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:15:39.494 [2024-07-20 15:51:14.078338] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:39.494 [2024-07-20 15:51:14.078353] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:39.494 [2024-07-20 15:51:14.078377] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:39.494 [2024-07-20 15:51:14.078387] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:39.494 [2024-07-20 15:51:14.078411] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:39.494 [2024-07-20 15:51:14.078420] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:39.494 [2024-07-20 15:51:14.078445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.494 [2024-07-20 15:51:14.078455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:39.494 [2024-07-20 15:51:14.078468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:15:39.494 [2024-07-20 15:51:14.078478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.494 [2024-07-20 15:51:14.078564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.494 [2024-07-20 15:51:14.078575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:39.494 [2024-07-20 15:51:14.078593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:15:39.495 [2024-07-20 15:51:14.078602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.495 [2024-07-20 15:51:14.078724] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:39.495 [2024-07-20 15:51:14.078738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:39.495 [2024-07-20 15:51:14.078752] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:39.495 [2024-07-20 15:51:14.078762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.495 [2024-07-20 15:51:14.078775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:39.495 [2024-07-20 15:51:14.078784] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:39.495 [2024-07-20 15:51:14.078795] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:39.495 [2024-07-20 15:51:14.078805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:39.495 [2024-07-20 15:51:14.078817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:39.495 [2024-07-20 15:51:14.078825] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:39.495 [2024-07-20 15:51:14.078840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:39.495 [2024-07-20 15:51:14.078850] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:39.495 [2024-07-20 15:51:14.078862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:39.495 [2024-07-20 15:51:14.078872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:39.495 [2024-07-20 15:51:14.078886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:39.495 [2024-07-20 15:51:14.078895] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.495 
[2024-07-20 15:51:14.078907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:39.495 [2024-07-20 15:51:14.078916] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:39.495 [2024-07-20 15:51:14.078927] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.495 [2024-07-20 15:51:14.078937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:39.495 [2024-07-20 15:51:14.078949] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:39.495 [2024-07-20 15:51:14.078958] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.495 [2024-07-20 15:51:14.078968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:39.495 [2024-07-20 15:51:14.078977] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:39.495 [2024-07-20 15:51:14.078988] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.495 [2024-07-20 15:51:14.078997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:39.495 [2024-07-20 15:51:14.079010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:39.495 [2024-07-20 15:51:14.079018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.495 [2024-07-20 15:51:14.079030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:39.495 [2024-07-20 15:51:14.079039] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:39.495 [2024-07-20 15:51:14.079053] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.495 [2024-07-20 15:51:14.079062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:39.495 [2024-07-20 15:51:14.079073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:39.495 [2024-07-20 15:51:14.079082] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:39.495 [2024-07-20 15:51:14.079093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:39.495 [2024-07-20 15:51:14.079102] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:39.495 [2024-07-20 15:51:14.079113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:39.495 [2024-07-20 15:51:14.079122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:39.495 [2024-07-20 15:51:14.079132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:39.495 [2024-07-20 15:51:14.079141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.495 [2024-07-20 15:51:14.079152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:39.495 [2024-07-20 15:51:14.079161] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:39.495 [2024-07-20 15:51:14.079173] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.495 [2024-07-20 15:51:14.079182] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:39.495 [2024-07-20 15:51:14.079206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:39.495 [2024-07-20 15:51:14.079215] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:39.495 [2024-07-20 15:51:14.079232] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.495 [2024-07-20 15:51:14.079244] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:15:39.495 [2024-07-20 15:51:14.079257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:39.495 [2024-07-20 15:51:14.079267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:39.495 [2024-07-20 15:51:14.079278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:39.495 [2024-07-20 15:51:14.079287] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:39.495 [2024-07-20 15:51:14.079299] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:39.495 [2024-07-20 15:51:14.079312] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:39.495 [2024-07-20 15:51:14.079327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:39.495 [2024-07-20 15:51:14.079338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:39.495 [2024-07-20 15:51:14.079350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:39.495 [2024-07-20 15:51:14.079630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:39.495 [2024-07-20 15:51:14.079686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:39.495 [2024-07-20 15:51:14.079732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:39.495 [2024-07-20 15:51:14.079781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:39.495 [2024-07-20 15:51:14.079905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:39.495 [2024-07-20 15:51:14.079962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:39.495 [2024-07-20 15:51:14.080008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:39.495 [2024-07-20 15:51:14.080120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:39.495 [2024-07-20 15:51:14.080171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:39.495 [2024-07-20 15:51:14.080219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:39.495 [2024-07-20 15:51:14.080264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:39.495 [2024-07-20 15:51:14.080377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:39.495 [2024-07-20 15:51:14.080431] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:39.495 [2024-07-20 
15:51:14.080481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:39.495 [2024-07-20 15:51:14.080575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:39.495 [2024-07-20 15:51:14.080630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:39.495 [2024-07-20 15:51:14.080676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:39.495 [2024-07-20 15:51:14.080728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:39.495 [2024-07-20 15:51:14.080820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.495 [2024-07-20 15:51:14.080982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:39.495 [2024-07-20 15:51:14.081016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.153 ms 00:15:39.495 [2024-07-20 15:51:14.081050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.495 [2024-07-20 15:51:14.081166] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:15:39.495 [2024-07-20 15:51:14.081345] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:42.024 [2024-07-20 15:51:16.373732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.024 [2024-07-20 15:51:16.373992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:42.024 [2024-07-20 15:51:16.374018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2296.286 ms 00:15:42.024 [2024-07-20 15:51:16.374032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.024 [2024-07-20 15:51:16.385304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.024 [2024-07-20 15:51:16.385511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:42.024 [2024-07-20 15:51:16.385638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.181 ms 00:15:42.024 [2024-07-20 15:51:16.385680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.024 [2024-07-20 15:51:16.385888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.024 [2024-07-20 15:51:16.385937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:42.024 [2024-07-20 15:51:16.386040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:42.024 [2024-07-20 15:51:16.386079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.024 [2024-07-20 15:51:16.408847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.024 [2024-07-20 15:51:16.409022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:42.024 [2024-07-20 15:51:16.409158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.715 ms 00:15:42.024 [2024-07-20 15:51:16.409211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.024 [2024-07-20 15:51:16.409285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.024 [2024-07-20 
15:51:16.409330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:42.024 [2024-07-20 15:51:16.409492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:42.024 [2024-07-20 15:51:16.409585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.024 [2024-07-20 15:51:16.410124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.024 [2024-07-20 15:51:16.410260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:42.024 [2024-07-20 15:51:16.410393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:15:42.024 [2024-07-20 15:51:16.410447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.024 [2024-07-20 15:51:16.410608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.024 [2024-07-20 15:51:16.410631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:42.024 [2024-07-20 15:51:16.410644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:15:42.024 [2024-07-20 15:51:16.410676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.024 [2024-07-20 15:51:16.418611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.024 [2024-07-20 15:51:16.418753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:42.024 [2024-07-20 15:51:16.418894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.914 ms 00:15:42.024 [2024-07-20 15:51:16.418943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.024 [2024-07-20 15:51:16.427688] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:42.024 [2024-07-20 15:51:16.444203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.024 [2024-07-20 15:51:16.444396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:42.024 [2024-07-20 15:51:16.444512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.162 ms 00:15:42.024 [2024-07-20 15:51:16.444565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.024 [2024-07-20 15:51:16.490772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.024 [2024-07-20 15:51:16.490972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:42.024 [2024-07-20 15:51:16.491071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.205 ms 00:15:42.024 [2024-07-20 15:51:16.491106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.024 [2024-07-20 15:51:16.491316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.024 [2024-07-20 15:51:16.491461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:42.024 [2024-07-20 15:51:16.491519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:15:42.024 [2024-07-20 15:51:16.491548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.024 [2024-07-20 15:51:16.494724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.024 [2024-07-20 15:51:16.494848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:42.024 [2024-07-20 15:51:16.494955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.126 ms 00:15:42.024 [2024-07-20 
15:51:16.494991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.024 [2024-07-20 15:51:16.497713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.024 [2024-07-20 15:51:16.497832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:42.024 [2024-07-20 15:51:16.497957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:15:42.024 [2024-07-20 15:51:16.497988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.024 [2024-07-20 15:51:16.498293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.024 [2024-07-20 15:51:16.498338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:42.024 [2024-07-20 15:51:16.498534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:15:42.024 [2024-07-20 15:51:16.498570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.024 [2024-07-20 15:51:16.529425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.025 [2024-07-20 15:51:16.529576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:42.025 [2024-07-20 15:51:16.529655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.830 ms 00:15:42.025 [2024-07-20 15:51:16.529690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.025 [2024-07-20 15:51:16.534134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.025 [2024-07-20 15:51:16.534175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:42.025 [2024-07-20 15:51:16.534191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.339 ms 00:15:42.025 [2024-07-20 15:51:16.534201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.025 [2024-07-20 15:51:16.537384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.025 [2024-07-20 15:51:16.537415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:42.025 [2024-07-20 15:51:16.537430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.134 ms 00:15:42.025 [2024-07-20 15:51:16.537440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.025 [2024-07-20 15:51:16.541153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.025 [2024-07-20 15:51:16.541187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:42.025 [2024-07-20 15:51:16.541203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.671 ms 00:15:42.025 [2024-07-20 15:51:16.541213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.025 [2024-07-20 15:51:16.541278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.025 [2024-07-20 15:51:16.541291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:42.025 [2024-07-20 15:51:16.541304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:15:42.025 [2024-07-20 15:51:16.541314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.025 [2024-07-20 15:51:16.541428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.025 [2024-07-20 15:51:16.541447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:42.025 [2024-07-20 15:51:16.541460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.045 ms 00:15:42.025 [2024-07-20 15:51:16.541470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.025 [2024-07-20 15:51:16.542606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2480.173 ms, result 0 00:15:42.025 { 00:15:42.025 "name": "ftl0", 00:15:42.025 "uuid": "0caac309-7e72-4848-ba02-ac86f13fd709" 00:15:42.025 } 00:15:42.025 15:51:16 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:42.025 15:51:16 ftl.ftl_fio_basic -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:15:42.025 15:51:16 ftl.ftl_fio_basic -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:42.025 15:51:16 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local i 00:15:42.025 15:51:16 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:42.025 15:51:16 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:42.025 15:51:16 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:42.025 15:51:16 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:42.283 [ 00:15:42.283 { 00:15:42.283 "name": "ftl0", 00:15:42.283 "aliases": [ 00:15:42.283 "0caac309-7e72-4848-ba02-ac86f13fd709" 00:15:42.283 ], 00:15:42.283 "product_name": "FTL disk", 00:15:42.283 "block_size": 4096, 00:15:42.283 "num_blocks": 20971520, 00:15:42.283 "uuid": "0caac309-7e72-4848-ba02-ac86f13fd709", 00:15:42.283 "assigned_rate_limits": { 00:15:42.283 "rw_ios_per_sec": 0, 00:15:42.283 "rw_mbytes_per_sec": 0, 00:15:42.283 "r_mbytes_per_sec": 0, 00:15:42.283 "w_mbytes_per_sec": 0 00:15:42.283 }, 00:15:42.283 "claimed": false, 00:15:42.283 "zoned": false, 00:15:42.283 "supported_io_types": { 00:15:42.283 "read": true, 00:15:42.283 "write": true, 00:15:42.283 "unmap": true, 00:15:42.283 "write_zeroes": true, 00:15:42.283 "flush": true, 00:15:42.283 "reset": false, 00:15:42.283 "compare": false, 00:15:42.283 "compare_and_write": false, 00:15:42.283 "abort": false, 00:15:42.283 "nvme_admin": false, 00:15:42.283 "nvme_io": false 00:15:42.283 }, 00:15:42.283 "driver_specific": { 00:15:42.283 "ftl": { 00:15:42.283 "base_bdev": "26e9cbe7-a489-4b7f-bd27-3ce42ce97889", 00:15:42.283 "cache": "nvc0n1p0" 00:15:42.283 } 00:15:42.283 } 00:15:42.283 } 00:15:42.283 ] 00:15:42.283 15:51:16 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # return 0 00:15:42.283 15:51:16 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:42.283 15:51:16 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:42.543 15:51:17 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:42.543 15:51:17 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:42.543 [2024-07-20 15:51:17.269110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.543 [2024-07-20 15:51:17.269333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:42.543 [2024-07-20 15:51:17.269375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:42.543 [2024-07-20 15:51:17.269390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.543 [2024-07-20 15:51:17.269438] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:15:42.543 [2024-07-20 15:51:17.270166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.543 [2024-07-20 15:51:17.270181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:42.543 [2024-07-20 15:51:17.270203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:15:42.543 [2024-07-20 15:51:17.270214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.543 [2024-07-20 15:51:17.270657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.543 [2024-07-20 15:51:17.270672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:42.543 [2024-07-20 15:51:17.270687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:15:42.543 [2024-07-20 15:51:17.270710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.543 [2024-07-20 15:51:17.273302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.543 [2024-07-20 15:51:17.273326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:42.543 [2024-07-20 15:51:17.273340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.566 ms 00:15:42.543 [2024-07-20 15:51:17.273350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.543 [2024-07-20 15:51:17.278340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.543 [2024-07-20 15:51:17.278381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:42.543 [2024-07-20 15:51:17.278396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.936 ms 00:15:42.543 [2024-07-20 15:51:17.278405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.543 [2024-07-20 15:51:17.280147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.543 [2024-07-20 15:51:17.280183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:42.543 [2024-07-20 15:51:17.280200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:15:42.543 [2024-07-20 15:51:17.280210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.543 [2024-07-20 15:51:17.285199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.543 [2024-07-20 15:51:17.285236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:42.543 [2024-07-20 15:51:17.285255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.953 ms 00:15:42.543 [2024-07-20 15:51:17.285266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.543 [2024-07-20 15:51:17.285428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.543 [2024-07-20 15:51:17.285454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:42.543 [2024-07-20 15:51:17.285468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:15:42.543 [2024-07-20 15:51:17.285478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.543 [2024-07-20 15:51:17.287610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.543 [2024-07-20 15:51:17.287643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:42.543 [2024-07-20 15:51:17.287658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 00:15:42.543 
[2024-07-20 15:51:17.287667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.543 [2024-07-20 15:51:17.289259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.543 [2024-07-20 15:51:17.289294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:42.543 [2024-07-20 15:51:17.289311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.552 ms 00:15:42.543 [2024-07-20 15:51:17.289320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.543 [2024-07-20 15:51:17.290527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.543 [2024-07-20 15:51:17.290559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:42.543 [2024-07-20 15:51:17.290576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:15:42.543 [2024-07-20 15:51:17.290585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.543 [2024-07-20 15:51:17.291761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.543 [2024-07-20 15:51:17.291792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:42.543 [2024-07-20 15:51:17.291806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.093 ms 00:15:42.543 [2024-07-20 15:51:17.291815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.543 [2024-07-20 15:51:17.291854] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:42.543 [2024-07-20 15:51:17.291871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.291886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.291896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.291909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.291920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.291936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.291946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.291959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.291969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.291983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.291994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:15:42.543 [2024-07-20 15:51:17.292041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:42.543 [2024-07-20 15:51:17.292970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:42.544 [2024-07-20 15:51:17.292981] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:42.544 [2024-07-20 15:51:17.292993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:42.544 [2024-07-20 15:51:17.293004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:42.544 [2024-07-20 15:51:17.293017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:42.544 [2024-07-20 15:51:17.293027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:42.544 [2024-07-20 15:51:17.293040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:42.544 [2024-07-20 15:51:17.293050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:42.544 [2024-07-20 15:51:17.293063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:42.544 [2024-07-20 15:51:17.293074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:42.544 [2024-07-20 15:51:17.293090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:42.544 [2024-07-20 15:51:17.293100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:42.544 [2024-07-20 15:51:17.293113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:42.544 [2024-07-20 15:51:17.293131] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:42.544 [2024-07-20 15:51:17.293159] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0caac309-7e72-4848-ba02-ac86f13fd709 00:15:42.544 [2024-07-20 15:51:17.293172] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:42.544 [2024-07-20 15:51:17.293184] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:42.544 [2024-07-20 15:51:17.293193] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:42.544 [2024-07-20 15:51:17.293207] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:42.544 [2024-07-20 15:51:17.293216] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:42.544 [2024-07-20 15:51:17.293228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:42.544 [2024-07-20 15:51:17.293238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:42.544 [2024-07-20 15:51:17.293249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:42.544 [2024-07-20 15:51:17.293258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:42.544 [2024-07-20 15:51:17.293270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.544 [2024-07-20 15:51:17.293280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:42.544 [2024-07-20 15:51:17.293293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:15:42.544 [2024-07-20 15:51:17.293302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.295155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.544 [2024-07-20 15:51:17.295176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:15:42.544 [2024-07-20 15:51:17.295193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.822 ms 00:15:42.544 [2024-07-20 15:51:17.295203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.295321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.544 [2024-07-20 15:51:17.295331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:42.544 [2024-07-20 15:51:17.295344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:15:42.544 [2024-07-20 15:51:17.295366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.302464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.544 [2024-07-20 15:51:17.302488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:42.544 [2024-07-20 15:51:17.302503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.544 [2024-07-20 15:51:17.302513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.302575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.544 [2024-07-20 15:51:17.302586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:42.544 [2024-07-20 15:51:17.302600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.544 [2024-07-20 15:51:17.302615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.302719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.544 [2024-07-20 15:51:17.302732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:42.544 [2024-07-20 15:51:17.302748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.544 [2024-07-20 15:51:17.302758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.302790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.544 [2024-07-20 15:51:17.302800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:42.544 [2024-07-20 15:51:17.302812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.544 [2024-07-20 15:51:17.302822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.315430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.544 [2024-07-20 15:51:17.315472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:42.544 [2024-07-20 15:51:17.315488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.544 [2024-07-20 15:51:17.315515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.323848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.544 [2024-07-20 15:51:17.323883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:42.544 [2024-07-20 15:51:17.323899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.544 [2024-07-20 15:51:17.323909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.323995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.544 [2024-07-20 
15:51:17.324006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:42.544 [2024-07-20 15:51:17.324037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.544 [2024-07-20 15:51:17.324047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.324108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.544 [2024-07-20 15:51:17.324119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:42.544 [2024-07-20 15:51:17.324131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.544 [2024-07-20 15:51:17.324140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.324237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.544 [2024-07-20 15:51:17.324254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:42.544 [2024-07-20 15:51:17.324288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.544 [2024-07-20 15:51:17.324298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.324374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.544 [2024-07-20 15:51:17.324387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:42.544 [2024-07-20 15:51:17.324417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.544 [2024-07-20 15:51:17.324438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.324502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.544 [2024-07-20 15:51:17.324515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:42.544 [2024-07-20 15:51:17.324531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.544 [2024-07-20 15:51:17.324540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.324594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.544 [2024-07-20 15:51:17.324606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:42.544 [2024-07-20 15:51:17.324619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.544 [2024-07-20 15:51:17.324629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.544 [2024-07-20 15:51:17.324820] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.740 ms, result 0 00:15:42.544 true 00:15:42.802 15:51:17 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 87608 00:15:42.802 15:51:17 ftl.ftl_fio_basic -- common/autotest_common.sh@946 -- # '[' -z 87608 ']' 00:15:42.802 15:51:17 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # kill -0 87608 00:15:42.802 15:51:17 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # uname 00:15:42.802 15:51:17 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:42.802 15:51:17 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87608 00:15:42.802 killing process with pid 87608 00:15:42.802 15:51:17 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:42.802 15:51:17 ftl.ftl_fio_basic -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:42.802 15:51:17 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87608' 00:15:42.802 15:51:17 ftl.ftl_fio_basic -- common/autotest_common.sh@965 -- # kill 87608 00:15:42.802 15:51:17 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # wait 87608 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:15:45.332 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:45.591 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:45.591 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:45.591 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:15:45.591 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:45.591 15:51:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:45.591 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:45.591 fio-3.35 00:15:45.591 Starting 1 thread 00:15:50.859 00:15:50.859 test: (groupid=0, jobs=1): err= 0: pid=87771: Sat Jul 20 15:51:25 2024 00:15:50.859 read: IOPS=944, BW=62.7MiB/s (65.8MB/s)(255MiB/4058msec) 00:15:50.859 slat (nsec): min=4456, max=81215, avg=6015.61, stdev=2378.80 00:15:50.859 clat (usec): min=322, max=673, avg=487.90, stdev=43.30 00:15:50.859 lat (usec): min=328, max=695, avg=493.92, stdev=43.64 00:15:50.859 clat percentiles (usec): 00:15:50.859 | 1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 449], 20.00th=[ 457], 00:15:50.859 | 30.00th=[ 
457], 40.00th=[ 465], 50.00th=[ 482], 60.00th=[ 519], 00:15:50.859 | 70.00th=[ 523], 80.00th=[ 529], 90.00th=[ 529], 95.00th=[ 537], 00:15:50.859 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 635], 99.95th=[ 644], 00:15:50.859 | 99.99th=[ 676] 00:15:50.859 write: IOPS=951, BW=63.2MiB/s (66.2MB/s)(256MiB/4054msec); 0 zone resets 00:15:50.859 slat (usec): min=15, max=105, avg=18.69, stdev= 3.75 00:15:50.859 clat (usec): min=400, max=1055, avg=532.14, stdev=62.54 00:15:50.859 lat (usec): min=417, max=1088, avg=550.83, stdev=63.02 00:15:50.859 clat percentiles (usec): 00:15:50.859 | 1.00th=[ 412], 5.00th=[ 469], 10.00th=[ 474], 20.00th=[ 478], 00:15:50.859 | 30.00th=[ 490], 40.00th=[ 537], 50.00th=[ 537], 60.00th=[ 545], 00:15:50.859 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 603], 95.00th=[ 611], 00:15:50.859 | 99.00th=[ 857], 99.50th=[ 898], 99.90th=[ 963], 99.95th=[ 996], 00:15:50.859 | 99.99th=[ 1057] 00:15:50.859 bw ( KiB/s): min=61336, max=65960, per=100.00%, avg=64770.00, stdev=1602.18, samples=8 00:15:50.859 iops : min= 902, max= 970, avg=952.50, stdev=23.56, samples=8 00:15:50.859 lat (usec) : 500=42.76%, 750=56.44%, 1000=0.78% 00:15:50.859 lat (msec) : 2=0.01% 00:15:50.859 cpu : usr=99.31%, sys=0.07%, ctx=6, majf=0, minf=1326 00:15:50.859 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.859 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.859 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.859 00:15:50.859 Run status group 0 (all jobs): 00:15:50.859 READ: bw=62.7MiB/s (65.8MB/s), 62.7MiB/s-62.7MiB/s (65.8MB/s-65.8MB/s), io=255MiB (267MB), run=4058-4058msec 00:15:50.859 WRITE: bw=63.2MiB/s (66.2MB/s), 63.2MiB/s-63.2MiB/s (66.2MB/s-66.2MB/s), io=256MiB (269MB), run=4054-4054msec 00:15:51.118 ----------------------------------------------------- 00:15:51.118 Suppressions used: 00:15:51.118 count bytes template 00:15:51.118 1 5 /usr/src/fio/parse.c 00:15:51.118 1 8 libtcmalloc_minimal.so 00:15:51.118 1 904 libcrypto.so 00:15:51.118 ----------------------------------------------------- 00:15:51.118 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # 
local sanitizers 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:51.118 15:51:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:51.375 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:51.375 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:51.375 fio-3.35 00:15:51.375 Starting 2 threads 00:16:17.897 00:16:17.897 first_half: (groupid=0, jobs=1): err= 0: pid=87858: Sat Jul 20 15:51:50 2024 00:16:17.897 read: IOPS=2786, BW=10.9MiB/s (11.4MB/s)(255MiB/23434msec) 00:16:17.897 slat (usec): min=3, max=124, avg= 5.85, stdev= 2.07 00:16:17.897 clat (usec): min=864, max=254908, avg=36492.51, stdev=16858.71 00:16:17.897 lat (usec): min=871, max=254913, avg=36498.36, stdev=16858.93 00:16:17.897 clat percentiles (msec): 00:16:17.897 | 1.00th=[ 12], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:16:17.897 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:16:17.897 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 39], 95.00th=[ 56], 00:16:17.897 | 99.00th=[ 136], 99.50th=[ 155], 99.90th=[ 171], 99.95th=[ 203], 00:16:17.897 | 99.99th=[ 247] 00:16:17.897 write: IOPS=3157, BW=12.3MiB/s (12.9MB/s)(256MiB/20757msec); 0 zone resets 00:16:17.897 slat (usec): min=4, max=675, avg= 7.53, stdev= 6.84 00:16:17.897 clat (usec): min=416, max=83713, avg=9381.07, stdev=15547.28 00:16:17.897 lat (usec): min=422, max=83720, avg=9388.61, stdev=15547.39 00:16:17.897 clat percentiles (usec): 00:16:17.897 | 1.00th=[ 938], 5.00th=[ 1172], 10.00th=[ 1352], 20.00th=[ 1713], 00:16:17.897 | 30.00th=[ 2376], 40.00th=[ 3884], 50.00th=[ 5145], 60.00th=[ 6194], 00:16:17.897 | 70.00th=[ 7111], 80.00th=[10159], 90.00th=[12911], 95.00th=[38011], 00:16:17.897 | 99.00th=[78119], 99.50th=[79168], 99.90th=[81265], 99.95th=[82314], 00:16:17.897 | 99.99th=[83362] 00:16:17.897 bw ( KiB/s): min= 808, max=41800, per=90.20%, avg=22782.83, stdev=12690.49, samples=23 00:16:17.897 iops : min= 202, max=10450, avg=5695.65, stdev=3172.61, samples=23 00:16:17.897 lat (usec) : 500=0.01%, 750=0.09%, 1000=0.74% 00:16:17.897 lat (msec) : 2=12.52%, 4=7.34%, 10=19.54%, 20=6.38%, 50=48.11% 00:16:17.897 lat (msec) : 100=4.33%, 250=0.94%, 
500=0.01% 00:16:17.897 cpu : usr=99.25%, sys=0.20%, ctx=39, majf=0, minf=5553 00:16:17.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:17.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.897 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.897 issued rwts: total=65303,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.897 second_half: (groupid=0, jobs=1): err= 0: pid=87859: Sat Jul 20 15:51:50 2024 00:16:17.897 read: IOPS=2774, BW=10.8MiB/s (11.4MB/s)(255MiB/23534msec) 00:16:17.897 slat (nsec): min=3498, max=95955, avg=5794.50, stdev=2137.79 00:16:17.897 clat (usec): min=749, max=259506, avg=36066.89, stdev=18816.92 00:16:17.897 lat (usec): min=757, max=259514, avg=36072.68, stdev=18817.16 00:16:17.897 clat percentiles (msec): 00:16:17.897 | 1.00th=[ 8], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:16:17.897 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:16:17.897 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 49], 00:16:17.897 | 99.00th=[ 148], 99.50th=[ 163], 99.90th=[ 188], 99.95th=[ 213], 00:16:17.897 | 99.99th=[ 253] 00:16:17.897 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(256MiB/18471msec); 0 zone resets 00:16:17.897 slat (usec): min=4, max=722, avg= 7.59, stdev= 5.10 00:16:17.897 clat (usec): min=441, max=84213, avg=9997.66, stdev=16474.93 00:16:17.897 lat (usec): min=448, max=84219, avg=10005.25, stdev=16475.05 00:16:17.897 clat percentiles (usec): 00:16:17.898 | 1.00th=[ 947], 5.00th=[ 1188], 10.00th=[ 1369], 20.00th=[ 1614], 00:16:17.898 | 30.00th=[ 1942], 40.00th=[ 3392], 50.00th=[ 4883], 60.00th=[ 5997], 00:16:17.898 | 70.00th=[ 7177], 80.00th=[10552], 90.00th=[26870], 95.00th=[53740], 00:16:17.898 | 99.00th=[78119], 99.50th=[80217], 99.90th=[82314], 99.95th=[83362], 00:16:17.898 | 99.99th=[83362] 00:16:17.898 bw ( KiB/s): min= 2344, max=55224, per=100.00%, avg=26212.85, stdev=16140.42, samples=20 00:16:17.898 iops : min= 586, max=13806, avg=6553.15, stdev=4035.10, samples=20 00:16:17.898 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.70% 00:16:17.898 lat (msec) : 2=14.79%, 4=6.68%, 10=17.96%, 20=5.75%, 50=49.07% 00:16:17.898 lat (msec) : 100=3.76%, 250=1.19%, 500=0.01% 00:16:17.898 cpu : usr=99.26%, sys=0.16%, ctx=31, majf=0, minf=5587 00:16:17.898 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:17.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.898 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.898 issued rwts: total=65297,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.898 00:16:17.898 Run status group 0 (all jobs): 00:16:17.898 READ: bw=21.7MiB/s (22.7MB/s), 10.8MiB/s-10.9MiB/s (11.4MB/s-11.4MB/s), io=510MiB (535MB), run=23434-23534msec 00:16:17.898 WRITE: bw=24.7MiB/s (25.9MB/s), 12.3MiB/s-13.9MiB/s (12.9MB/s-14.5MB/s), io=512MiB (537MB), run=18471-20757msec 00:16:17.898 ----------------------------------------------------- 00:16:17.898 Suppressions used: 00:16:17.898 count bytes template 00:16:17.898 2 10 /usr/src/fio/parse.c 00:16:17.898 5 480 /usr/src/fio/iolog.c 00:16:17.898 1 8 libtcmalloc_minimal.so 00:16:17.898 1 904 libcrypto.so 00:16:17.898 ----------------------------------------------------- 00:16:17.898 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 
00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:17.898 15:51:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:17.898 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:17.898 fio-3.35 00:16:17.898 Starting 1 thread 00:16:32.763 00:16:32.763 test: (groupid=0, jobs=1): err= 0: pid=88159: Sat Jul 20 15:52:05 2024 00:16:32.763 read: IOPS=8017, BW=31.3MiB/s (32.8MB/s)(255MiB/8132msec) 00:16:32.763 slat (nsec): min=3397, max=32458, avg=5001.17, stdev=1699.99 00:16:32.763 clat (usec): min=586, max=31430, avg=15955.21, stdev=815.26 00:16:32.763 lat (usec): min=590, max=31434, avg=15960.22, stdev=815.24 00:16:32.763 clat percentiles (usec): 00:16:32.763 | 1.00th=[15008], 5.00th=[15401], 10.00th=[15533], 20.00th=[15664], 00:16:32.763 | 30.00th=[15664], 40.00th=[15795], 50.00th=[15926], 60.00th=[15926], 00:16:32.763 | 70.00th=[16057], 80.00th=[16188], 90.00th=[16450], 95.00th=[16581], 00:16:32.763 | 99.00th=[17695], 99.50th=[18220], 99.90th=[26608], 99.95th=[27657], 
00:16:32.763 | 99.99th=[30802] 00:16:32.763 write: IOPS=14.2k, BW=55.4MiB/s (58.1MB/s)(256MiB/4622msec); 0 zone resets 00:16:32.763 slat (usec): min=4, max=622, avg= 7.28, stdev= 7.89 00:16:32.763 clat (usec): min=571, max=54605, avg=8982.00, stdev=11100.52 00:16:32.763 lat (usec): min=577, max=54613, avg=8989.28, stdev=11100.54 00:16:32.763 clat percentiles (usec): 00:16:32.763 | 1.00th=[ 922], 5.00th=[ 1106], 10.00th=[ 1237], 20.00th=[ 1418], 00:16:32.763 | 30.00th=[ 1582], 40.00th=[ 1909], 50.00th=[ 5866], 60.00th=[ 6718], 00:16:32.763 | 70.00th=[ 7701], 80.00th=[ 9503], 90.00th=[32900], 95.00th=[34341], 00:16:32.763 | 99.00th=[36439], 99.50th=[37487], 99.90th=[50594], 99.95th=[52691], 00:16:32.763 | 99.99th=[54264] 00:16:32.763 bw ( KiB/s): min=10824, max=81096, per=92.44%, avg=52428.80, stdev=18147.51, samples=10 00:16:32.763 iops : min= 2706, max=20274, avg=13107.20, stdev=4536.88, samples=10 00:16:32.763 lat (usec) : 750=0.03%, 1000=1.21% 00:16:32.763 lat (msec) : 2=19.16%, 4=0.74%, 10=20.04%, 20=50.65%, 50=8.11% 00:16:32.763 lat (msec) : 100=0.06% 00:16:32.763 cpu : usr=99.00%, sys=0.32%, ctx=20, majf=0, minf=5577 00:16:32.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:32.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.763 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.763 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.763 00:16:32.763 Run status group 0 (all jobs): 00:16:32.763 READ: bw=31.3MiB/s (32.8MB/s), 31.3MiB/s-31.3MiB/s (32.8MB/s-32.8MB/s), io=255MiB (267MB), run=8132-8132msec 00:16:32.763 WRITE: bw=55.4MiB/s (58.1MB/s), 55.4MiB/s-55.4MiB/s (58.1MB/s-58.1MB/s), io=256MiB (268MB), run=4622-4622msec 00:16:32.763 ----------------------------------------------------- 00:16:32.763 Suppressions used: 00:16:32.763 count bytes template 00:16:32.763 1 5 /usr/src/fio/parse.c 00:16:32.763 2 192 /usr/src/fio/iolog.c 00:16:32.763 1 8 libtcmalloc_minimal.so 00:16:32.763 1 904 libcrypto.so 00:16:32.763 ----------------------------------------------------- 00:16:32.763 00:16:32.763 15:52:06 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:32.763 15:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:32.763 15:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:32.763 15:52:06 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:32.763 Remove shared memory files 00:16:32.763 15:52:06 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:32.763 15:52:06 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:32.763 15:52:06 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:32.763 15:52:06 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:32.763 15:52:06 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid73865 /dev/shm/spdk_tgt_trace.pid86577 00:16:32.763 15:52:06 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:32.763 15:52:06 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:32.763 ************************************ 00:16:32.763 END TEST ftl_fio_basic 00:16:32.763 ************************************ 00:16:32.763 00:16:32.763 real 0m55.665s 00:16:32.763 user 2m2.416s 00:16:32.763 sys 0m3.316s 00:16:32.763 15:52:06 
ftl.ftl_fio_basic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:32.763 15:52:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:32.763 15:52:06 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:32.763 15:52:06 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:32.763 15:52:06 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:32.763 15:52:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:32.763 ************************************ 00:16:32.763 START TEST ftl_bdevperf 00:16:32.763 ************************************ 00:16:32.763 15:52:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:32.763 * Looking for test storage... 00:16:32.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:32.763 15:52:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:32.763 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:32.763 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:32.763 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:32.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=88376 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 88376 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 88376 ']' 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:32.764 15:52:06 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:32.764 [2024-07-20 15:52:06.591778] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:16:32.764 [2024-07-20 15:52:06.592726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88376 ] 00:16:32.764 [2024-07-20 15:52:06.742464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.764 [2024-07-20 15:52:06.782924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.764 15:52:07 ftl.ftl_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:32.764 15:52:07 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:16:32.764 15:52:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:32.764 15:52:07 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:32.764 15:52:07 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:32.764 15:52:07 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:32.764 15:52:07 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:32.764 15:52:07 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:33.023 15:52:07 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:33.023 15:52:07 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:33.023 15:52:07 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:33.023 15:52:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:16:33.023 15:52:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:33.023 15:52:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:33.023 15:52:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:33.023 15:52:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:33.023 15:52:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:33.023 { 00:16:33.023 "name": "nvme0n1", 00:16:33.023 "aliases": [ 00:16:33.023 "56094b19-6fba-4c9e-9963-41d98d0cb059" 00:16:33.023 ], 00:16:33.023 "product_name": "NVMe disk", 00:16:33.023 "block_size": 4096, 00:16:33.023 "num_blocks": 1310720, 00:16:33.023 "uuid": "56094b19-6fba-4c9e-9963-41d98d0cb059", 00:16:33.023 "assigned_rate_limits": { 00:16:33.023 "rw_ios_per_sec": 0, 00:16:33.023 "rw_mbytes_per_sec": 0, 00:16:33.023 "r_mbytes_per_sec": 0, 00:16:33.023 "w_mbytes_per_sec": 0 00:16:33.023 }, 00:16:33.023 "claimed": true, 00:16:33.023 "claim_type": "read_many_write_one", 00:16:33.023 "zoned": false, 00:16:33.023 "supported_io_types": { 00:16:33.023 "read": true, 00:16:33.023 "write": true, 00:16:33.023 "unmap": true, 00:16:33.023 "write_zeroes": true, 00:16:33.023 "flush": true, 00:16:33.023 "reset": true, 00:16:33.023 "compare": true, 00:16:33.023 "compare_and_write": false, 00:16:33.023 "abort": true, 00:16:33.023 "nvme_admin": true, 00:16:33.023 "nvme_io": true 00:16:33.023 }, 00:16:33.023 "driver_specific": { 00:16:33.023 "nvme": [ 00:16:33.023 { 00:16:33.023 "pci_address": "0000:00:11.0", 00:16:33.023 "trid": { 00:16:33.023 "trtype": "PCIe", 00:16:33.023 "traddr": "0000:00:11.0" 00:16:33.023 }, 00:16:33.023 "ctrlr_data": { 00:16:33.023 "cntlid": 0, 00:16:33.023 "vendor_id": "0x1b36", 00:16:33.023 "model_number": "QEMU NVMe Ctrl", 00:16:33.023 "serial_number": "12341", 
00:16:33.023 "firmware_revision": "8.0.0", 00:16:33.023 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:33.023 "oacs": { 00:16:33.023 "security": 0, 00:16:33.023 "format": 1, 00:16:33.023 "firmware": 0, 00:16:33.023 "ns_manage": 1 00:16:33.023 }, 00:16:33.023 "multi_ctrlr": false, 00:16:33.023 "ana_reporting": false 00:16:33.023 }, 00:16:33.023 "vs": { 00:16:33.023 "nvme_version": "1.4" 00:16:33.023 }, 00:16:33.023 "ns_data": { 00:16:33.023 "id": 1, 00:16:33.023 "can_share": false 00:16:33.023 } 00:16:33.023 } 00:16:33.023 ], 00:16:33.023 "mp_policy": "active_passive" 00:16:33.023 } 00:16:33.023 } 00:16:33.023 ]' 00:16:33.023 15:52:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:33.282 15:52:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:33.282 15:52:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:33.282 15:52:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=1310720 00:16:33.282 15:52:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:16:33.282 15:52:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 5120 00:16:33.282 15:52:07 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:33.282 15:52:07 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:33.282 15:52:07 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:33.282 15:52:07 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:33.282 15:52:07 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:33.282 15:52:08 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=a690e30b-dd9d-4689-8630-375f2801af68 00:16:33.282 15:52:08 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:33.282 15:52:08 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a690e30b-dd9d-4689-8630-375f2801af68 00:16:33.540 15:52:08 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:33.799 15:52:08 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=9a3860a3-e07f-4295-aeef-91c95ea9e1e0 00:16:33.799 15:52:08 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9a3860a3-e07f-4295-aeef-91c95ea9e1e0 00:16:34.057 15:52:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=7038713c-d2b9-4930-a78f-abe347472d42 00:16:34.057 15:52:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7038713c-d2b9-4930-a78f-abe347472d42 00:16:34.057 15:52:08 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:34.057 15:52:08 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:34.057 15:52:08 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=7038713c-d2b9-4930-a78f-abe347472d42 00:16:34.057 15:52:08 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:34.057 15:52:08 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 7038713c-d2b9-4930-a78f-abe347472d42 00:16:34.057 15:52:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=7038713c-d2b9-4930-a78f-abe347472d42 00:16:34.057 15:52:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:34.057 15:52:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:34.057 15:52:08 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1377 -- # local nb 00:16:34.057 15:52:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7038713c-d2b9-4930-a78f-abe347472d42 00:16:34.057 15:52:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:34.057 { 00:16:34.057 "name": "7038713c-d2b9-4930-a78f-abe347472d42", 00:16:34.057 "aliases": [ 00:16:34.057 "lvs/nvme0n1p0" 00:16:34.057 ], 00:16:34.057 "product_name": "Logical Volume", 00:16:34.057 "block_size": 4096, 00:16:34.057 "num_blocks": 26476544, 00:16:34.057 "uuid": "7038713c-d2b9-4930-a78f-abe347472d42", 00:16:34.057 "assigned_rate_limits": { 00:16:34.057 "rw_ios_per_sec": 0, 00:16:34.057 "rw_mbytes_per_sec": 0, 00:16:34.057 "r_mbytes_per_sec": 0, 00:16:34.057 "w_mbytes_per_sec": 0 00:16:34.057 }, 00:16:34.057 "claimed": false, 00:16:34.057 "zoned": false, 00:16:34.057 "supported_io_types": { 00:16:34.057 "read": true, 00:16:34.057 "write": true, 00:16:34.057 "unmap": true, 00:16:34.058 "write_zeroes": true, 00:16:34.058 "flush": false, 00:16:34.058 "reset": true, 00:16:34.058 "compare": false, 00:16:34.058 "compare_and_write": false, 00:16:34.058 "abort": false, 00:16:34.058 "nvme_admin": false, 00:16:34.058 "nvme_io": false 00:16:34.058 }, 00:16:34.058 "driver_specific": { 00:16:34.058 "lvol": { 00:16:34.058 "lvol_store_uuid": "9a3860a3-e07f-4295-aeef-91c95ea9e1e0", 00:16:34.058 "base_bdev": "nvme0n1", 00:16:34.058 "thin_provision": true, 00:16:34.058 "num_allocated_clusters": 0, 00:16:34.058 "snapshot": false, 00:16:34.058 "clone": false, 00:16:34.058 "esnap_clone": false 00:16:34.058 } 00:16:34.058 } 00:16:34.058 } 00:16:34.058 ]' 00:16:34.058 15:52:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:34.058 15:52:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:34.058 15:52:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:34.316 15:52:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:34.316 15:52:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:34.316 15:52:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:16:34.316 15:52:08 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:34.316 15:52:08 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:34.316 15:52:08 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:34.316 15:52:09 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:34.316 15:52:09 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:34.316 15:52:09 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 7038713c-d2b9-4930-a78f-abe347472d42 00:16:34.316 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=7038713c-d2b9-4930-a78f-abe347472d42 00:16:34.316 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:34.316 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:34.316 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:34.316 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7038713c-d2b9-4930-a78f-abe347472d42 00:16:34.575 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:34.575 { 00:16:34.575 "name": 
"7038713c-d2b9-4930-a78f-abe347472d42", 00:16:34.575 "aliases": [ 00:16:34.575 "lvs/nvme0n1p0" 00:16:34.575 ], 00:16:34.575 "product_name": "Logical Volume", 00:16:34.575 "block_size": 4096, 00:16:34.575 "num_blocks": 26476544, 00:16:34.575 "uuid": "7038713c-d2b9-4930-a78f-abe347472d42", 00:16:34.575 "assigned_rate_limits": { 00:16:34.575 "rw_ios_per_sec": 0, 00:16:34.575 "rw_mbytes_per_sec": 0, 00:16:34.575 "r_mbytes_per_sec": 0, 00:16:34.575 "w_mbytes_per_sec": 0 00:16:34.575 }, 00:16:34.575 "claimed": false, 00:16:34.575 "zoned": false, 00:16:34.575 "supported_io_types": { 00:16:34.575 "read": true, 00:16:34.575 "write": true, 00:16:34.575 "unmap": true, 00:16:34.575 "write_zeroes": true, 00:16:34.575 "flush": false, 00:16:34.575 "reset": true, 00:16:34.575 "compare": false, 00:16:34.575 "compare_and_write": false, 00:16:34.575 "abort": false, 00:16:34.575 "nvme_admin": false, 00:16:34.575 "nvme_io": false 00:16:34.575 }, 00:16:34.575 "driver_specific": { 00:16:34.575 "lvol": { 00:16:34.575 "lvol_store_uuid": "9a3860a3-e07f-4295-aeef-91c95ea9e1e0", 00:16:34.575 "base_bdev": "nvme0n1", 00:16:34.575 "thin_provision": true, 00:16:34.575 "num_allocated_clusters": 0, 00:16:34.575 "snapshot": false, 00:16:34.575 "clone": false, 00:16:34.575 "esnap_clone": false 00:16:34.575 } 00:16:34.575 } 00:16:34.575 } 00:16:34.575 ]' 00:16:34.575 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:34.575 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:34.575 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:34.834 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:34.834 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:34.834 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:16:34.834 15:52:09 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:34.834 15:52:09 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:34.834 15:52:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:16:34.834 15:52:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 7038713c-d2b9-4930-a78f-abe347472d42 00:16:34.834 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=7038713c-d2b9-4930-a78f-abe347472d42 00:16:34.834 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:34.834 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:34.834 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:34.834 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7038713c-d2b9-4930-a78f-abe347472d42 00:16:35.093 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:35.093 { 00:16:35.093 "name": "7038713c-d2b9-4930-a78f-abe347472d42", 00:16:35.093 "aliases": [ 00:16:35.093 "lvs/nvme0n1p0" 00:16:35.093 ], 00:16:35.093 "product_name": "Logical Volume", 00:16:35.093 "block_size": 4096, 00:16:35.093 "num_blocks": 26476544, 00:16:35.093 "uuid": "7038713c-d2b9-4930-a78f-abe347472d42", 00:16:35.093 "assigned_rate_limits": { 00:16:35.093 "rw_ios_per_sec": 0, 00:16:35.093 "rw_mbytes_per_sec": 0, 00:16:35.093 "r_mbytes_per_sec": 0, 00:16:35.093 "w_mbytes_per_sec": 0 00:16:35.093 }, 00:16:35.093 "claimed": false, 
00:16:35.093 "zoned": false, 00:16:35.093 "supported_io_types": { 00:16:35.093 "read": true, 00:16:35.093 "write": true, 00:16:35.093 "unmap": true, 00:16:35.093 "write_zeroes": true, 00:16:35.093 "flush": false, 00:16:35.093 "reset": true, 00:16:35.093 "compare": false, 00:16:35.093 "compare_and_write": false, 00:16:35.093 "abort": false, 00:16:35.093 "nvme_admin": false, 00:16:35.093 "nvme_io": false 00:16:35.093 }, 00:16:35.093 "driver_specific": { 00:16:35.093 "lvol": { 00:16:35.093 "lvol_store_uuid": "9a3860a3-e07f-4295-aeef-91c95ea9e1e0", 00:16:35.093 "base_bdev": "nvme0n1", 00:16:35.093 "thin_provision": true, 00:16:35.093 "num_allocated_clusters": 0, 00:16:35.093 "snapshot": false, 00:16:35.093 "clone": false, 00:16:35.093 "esnap_clone": false 00:16:35.093 } 00:16:35.093 } 00:16:35.093 } 00:16:35.093 ]' 00:16:35.093 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:35.093 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:35.093 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:35.093 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:35.093 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:35.093 15:52:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:16:35.093 15:52:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:16:35.093 15:52:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7038713c-d2b9-4930-a78f-abe347472d42 -c nvc0n1p0 --l2p_dram_limit 20 00:16:35.353 [2024-07-20 15:52:09.978584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.353 [2024-07-20 15:52:09.978653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:35.353 [2024-07-20 15:52:09.978671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:35.353 [2024-07-20 15:52:09.978683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.353 [2024-07-20 15:52:09.978748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.353 [2024-07-20 15:52:09.978771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:35.353 [2024-07-20 15:52:09.978782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:35.353 [2024-07-20 15:52:09.978801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.354 [2024-07-20 15:52:09.978833] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:35.354 [2024-07-20 15:52:09.979152] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:35.354 [2024-07-20 15:52:09.979180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.354 [2024-07-20 15:52:09.979196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:35.354 [2024-07-20 15:52:09.979208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:16:35.354 [2024-07-20 15:52:09.979220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.354 [2024-07-20 15:52:09.979311] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9d41ce85-b7cc-46f4-beef-21edc5a90b25 00:16:35.354 [2024-07-20 15:52:09.980692] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.354 [2024-07-20 15:52:09.980724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:35.354 [2024-07-20 15:52:09.980740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:35.354 [2024-07-20 15:52:09.980753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.354 [2024-07-20 15:52:09.988143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.354 [2024-07-20 15:52:09.988169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:35.354 [2024-07-20 15:52:09.988184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.365 ms 00:16:35.354 [2024-07-20 15:52:09.988193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.354 [2024-07-20 15:52:09.988299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.354 [2024-07-20 15:52:09.988313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:35.354 [2024-07-20 15:52:09.988327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:35.354 [2024-07-20 15:52:09.988336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.354 [2024-07-20 15:52:09.988415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.354 [2024-07-20 15:52:09.988443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:35.354 [2024-07-20 15:52:09.988456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:35.354 [2024-07-20 15:52:09.988466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.354 [2024-07-20 15:52:09.988490] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:35.354 [2024-07-20 15:52:09.990288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.354 [2024-07-20 15:52:09.990351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:35.354 [2024-07-20 15:52:09.990372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.810 ms 00:16:35.354 [2024-07-20 15:52:09.990384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.354 [2024-07-20 15:52:09.990424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.354 [2024-07-20 15:52:09.990438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:35.354 [2024-07-20 15:52:09.990448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:35.354 [2024-07-20 15:52:09.990463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.354 [2024-07-20 15:52:09.990479] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:35.354 [2024-07-20 15:52:09.990608] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:35.354 [2024-07-20 15:52:09.990622] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:35.354 [2024-07-20 15:52:09.990637] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:35.354 [2024-07-20 15:52:09.990650] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:35.354 [2024-07-20 
15:52:09.990664] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:35.354 [2024-07-20 15:52:09.990675] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:35.354 [2024-07-20 15:52:09.990702] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:35.354 [2024-07-20 15:52:09.990714] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:35.354 [2024-07-20 15:52:09.990726] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:35.354 [2024-07-20 15:52:09.990742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.354 [2024-07-20 15:52:09.990755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:35.354 [2024-07-20 15:52:09.990765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:16:35.354 [2024-07-20 15:52:09.990777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.354 [2024-07-20 15:52:09.990845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.354 [2024-07-20 15:52:09.990862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:35.354 [2024-07-20 15:52:09.990872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:35.354 [2024-07-20 15:52:09.990890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.354 [2024-07-20 15:52:09.990977] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:35.354 [2024-07-20 15:52:09.990993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:35.354 [2024-07-20 15:52:09.991004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:35.354 [2024-07-20 15:52:09.991016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.354 [2024-07-20 15:52:09.991029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:35.354 [2024-07-20 15:52:09.991041] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:35.354 [2024-07-20 15:52:09.991050] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:35.354 [2024-07-20 15:52:09.991061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:35.354 [2024-07-20 15:52:09.991071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:35.354 [2024-07-20 15:52:09.991082] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:35.354 [2024-07-20 15:52:09.991091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:35.354 [2024-07-20 15:52:09.991103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:35.354 [2024-07-20 15:52:09.991111] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:35.354 [2024-07-20 15:52:09.991126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:35.354 [2024-07-20 15:52:09.991135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:35.354 [2024-07-20 15:52:09.991147] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.354 [2024-07-20 15:52:09.991157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:35.354 [2024-07-20 15:52:09.991169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:35.354 [2024-07-20 
15:52:09.991178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.354 [2024-07-20 15:52:09.991191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:35.354 [2024-07-20 15:52:09.991203] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:35.354 [2024-07-20 15:52:09.991215] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:35.354 [2024-07-20 15:52:09.991224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:35.354 [2024-07-20 15:52:09.991236] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:35.354 [2024-07-20 15:52:09.991245] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:35.354 [2024-07-20 15:52:09.991256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:35.354 [2024-07-20 15:52:09.991266] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:35.354 [2024-07-20 15:52:09.991277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:35.354 [2024-07-20 15:52:09.991286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:35.354 [2024-07-20 15:52:09.991300] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:35.354 [2024-07-20 15:52:09.991309] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:35.354 [2024-07-20 15:52:09.991320] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:35.354 [2024-07-20 15:52:09.991329] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:35.354 [2024-07-20 15:52:09.991340] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:35.354 [2024-07-20 15:52:09.991349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:35.354 [2024-07-20 15:52:09.991372] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:35.354 [2024-07-20 15:52:09.991383] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:35.354 [2024-07-20 15:52:09.991395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:35.354 [2024-07-20 15:52:09.991404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:35.354 [2024-07-20 15:52:09.991415] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.354 [2024-07-20 15:52:09.991424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:35.354 [2024-07-20 15:52:09.991436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:35.354 [2024-07-20 15:52:09.991445] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.354 [2024-07-20 15:52:09.991457] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:35.354 [2024-07-20 15:52:09.991467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:35.354 [2024-07-20 15:52:09.991481] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:35.354 [2024-07-20 15:52:09.991491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:35.354 [2024-07-20 15:52:09.991503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:35.354 [2024-07-20 15:52:09.991513] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:35.354 [2024-07-20 15:52:09.991524] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 3.38 MiB 00:16:35.354 [2024-07-20 15:52:09.991533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:35.354 [2024-07-20 15:52:09.991544] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:35.354 [2024-07-20 15:52:09.991556] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:35.354 [2024-07-20 15:52:09.991572] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:35.354 [2024-07-20 15:52:09.991585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:35.354 [2024-07-20 15:52:09.991601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:35.354 [2024-07-20 15:52:09.991612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:35.354 [2024-07-20 15:52:09.991625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:35.354 [2024-07-20 15:52:09.991635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:35.354 [2024-07-20 15:52:09.991647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:35.354 [2024-07-20 15:52:09.991657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:35.355 [2024-07-20 15:52:09.991672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:35.355 [2024-07-20 15:52:09.991683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:35.355 [2024-07-20 15:52:09.991696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:35.355 [2024-07-20 15:52:09.991706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:35.355 [2024-07-20 15:52:09.991719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:35.355 [2024-07-20 15:52:09.991729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:35.355 [2024-07-20 15:52:09.991741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:35.355 [2024-07-20 15:52:09.991754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:35.355 [2024-07-20 15:52:09.991767] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:35.355 [2024-07-20 15:52:09.991777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:35.355 [2024-07-20 15:52:09.991801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:35.355 [2024-07-20 15:52:09.991811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:35.355 [2024-07-20 15:52:09.991829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:35.355 [2024-07-20 15:52:09.991840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:35.355 [2024-07-20 15:52:09.991853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.355 [2024-07-20 15:52:09.991870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:35.355 [2024-07-20 15:52:09.991885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:16:35.355 [2024-07-20 15:52:09.991895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.355 [2024-07-20 15:52:09.991932] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:16:35.355 [2024-07-20 15:52:09.991945] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:38.637 [2024-07-20 15:52:13.371143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.637 [2024-07-20 15:52:13.371201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:38.637 [2024-07-20 15:52:13.371221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3384.691 ms 00:16:38.637 [2024-07-20 15:52:13.371232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.637 [2024-07-20 15:52:13.393051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.637 [2024-07-20 15:52:13.393158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:38.637 [2024-07-20 15:52:13.393214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.737 ms 00:16:38.637 [2024-07-20 15:52:13.393248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.637 [2024-07-20 15:52:13.393597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.637 [2024-07-20 15:52:13.393663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:38.637 [2024-07-20 15:52:13.393707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:16:38.637 [2024-07-20 15:52:13.393740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.637 [2024-07-20 15:52:13.411535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.637 [2024-07-20 15:52:13.411597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:38.637 [2024-07-20 15:52:13.411630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.687 ms 00:16:38.637 [2024-07-20 15:52:13.411652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.637 [2024-07-20 15:52:13.411731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.637 [2024-07-20 15:52:13.411758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:38.637 [2024-07-20 15:52:13.411785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:38.637 [2024-07-20 15:52:13.411806] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.637 [2024-07-20 15:52:13.412439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.637 [2024-07-20 15:52:13.412478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:38.637 [2024-07-20 15:52:13.412506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:16:38.637 [2024-07-20 15:52:13.412526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.637 [2024-07-20 15:52:13.412743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.637 [2024-07-20 15:52:13.412793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:38.637 [2024-07-20 15:52:13.412824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:16:38.637 [2024-07-20 15:52:13.412845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.637 [2024-07-20 15:52:13.420131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.637 [2024-07-20 15:52:13.420172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:38.637 [2024-07-20 15:52:13.420202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.260 ms 00:16:38.637 [2024-07-20 15:52:13.420217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.637 [2024-07-20 15:52:13.428941] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:38.896 [2024-07-20 15:52:13.434840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.896 [2024-07-20 15:52:13.434874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:38.896 [2024-07-20 15:52:13.434886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.554 ms 00:16:38.896 [2024-07-20 15:52:13.434899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.896 [2024-07-20 15:52:13.506593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.896 [2024-07-20 15:52:13.506659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:38.896 [2024-07-20 15:52:13.506676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.783 ms 00:16:38.896 [2024-07-20 15:52:13.506695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.896 [2024-07-20 15:52:13.506880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.896 [2024-07-20 15:52:13.506897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:38.896 [2024-07-20 15:52:13.506908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:16:38.896 [2024-07-20 15:52:13.506920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.896 [2024-07-20 15:52:13.510499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.896 [2024-07-20 15:52:13.510548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:38.896 [2024-07-20 15:52:13.510561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.565 ms 00:16:38.896 [2024-07-20 15:52:13.510576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.896 [2024-07-20 15:52:13.513409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.896 [2024-07-20 15:52:13.513446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Save initial chunk info metadata 00:16:38.896 [2024-07-20 15:52:13.513475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.800 ms 00:16:38.896 [2024-07-20 15:52:13.513487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.896 [2024-07-20 15:52:13.513751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.896 [2024-07-20 15:52:13.513770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:38.896 [2024-07-20 15:52:13.513782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:16:38.896 [2024-07-20 15:52:13.513804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.896 [2024-07-20 15:52:13.551506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.896 [2024-07-20 15:52:13.551551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:38.896 [2024-07-20 15:52:13.551565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.744 ms 00:16:38.896 [2024-07-20 15:52:13.551582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.896 [2024-07-20 15:52:13.556056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.896 [2024-07-20 15:52:13.556096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:38.896 [2024-07-20 15:52:13.556109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.437 ms 00:16:38.896 [2024-07-20 15:52:13.556121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.896 [2024-07-20 15:52:13.559175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.896 [2024-07-20 15:52:13.559213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:38.896 [2024-07-20 15:52:13.559225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.018 ms 00:16:38.896 [2024-07-20 15:52:13.559238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.896 [2024-07-20 15:52:13.562859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.896 [2024-07-20 15:52:13.562898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:38.896 [2024-07-20 15:52:13.562910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.595 ms 00:16:38.896 [2024-07-20 15:52:13.562925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.896 [2024-07-20 15:52:13.562964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.896 [2024-07-20 15:52:13.562978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:38.896 [2024-07-20 15:52:13.562990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:38.896 [2024-07-20 15:52:13.563003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.896 [2024-07-20 15:52:13.563063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:38.896 [2024-07-20 15:52:13.563077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:38.896 [2024-07-20 15:52:13.563088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:38.896 [2024-07-20 15:52:13.563099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:38.896 [2024-07-20 15:52:13.564132] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 
'FTL startup', duration = 3590.984 ms, result 0 00:16:38.896 { 00:16:38.896 "name": "ftl0", 00:16:38.896 "uuid": "9d41ce85-b7cc-46f4-beef-21edc5a90b25" 00:16:38.896 } 00:16:38.896 15:52:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:38.896 15:52:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:16:38.896 15:52:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:16:39.154 15:52:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:39.154 [2024-07-20 15:52:13.860466] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:39.154 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:39.154 Zero copy mechanism will not be used. 00:16:39.154 Running I/O for 4 seconds... 00:16:43.342 00:16:43.342 Latency(us) 00:16:43.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.342 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:43.342 ftl0 : 4.00 1563.08 103.80 0.00 0.00 672.68 243.46 881.71 00:16:43.342 =================================================================================================================== 00:16:43.342 Total : 1563.08 103.80 0.00 0.00 672.68 243.46 881.71 00:16:43.342 0 00:16:43.342 [2024-07-20 15:52:17.860421] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:43.342 15:52:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:43.342 [2024-07-20 15:52:17.963406] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:43.342 Running I/O for 4 seconds... 00:16:47.599 00:16:47.599 Latency(us) 00:16:47.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.599 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.599 ftl0 : 4.01 11960.59 46.72 0.00 0.00 10682.17 200.69 24635.22 00:16:47.599 =================================================================================================================== 00:16:47.599 Total : 11960.59 46.72 0.00 0.00 10682.17 0.00 24635.22 00:16:47.600 [2024-07-20 15:52:21.975393] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:47.600 0 00:16:47.600 15:52:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:47.600 [2024-07-20 15:52:22.087377] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:47.600 Running I/O for 4 seconds... 
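The three traced commands above — rpc.py bdev_ftl_get_stats, jq -r .name, and grep -qw ftl0 — together verify that the FTL bdev came up under the expected name before any I/O is driven. A minimal standalone sketch of the same check, assuming the repo paths shown in the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 \
    | jq -r .name \
    | grep -qw ftl0   # exit status 0 only if the reported name is exactly ftl0

Note also the I/O size of the first bdevperf run: 69632 bytes is 68 KiB, which exceeds the 65536-byte (64 KiB) zero-copy threshold — hence the logged notice that the zero-copy mechanism is skipped for that workload.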
00:16:51.782 00:16:51.782 Latency(us) 00:16:51.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.782 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:51.782 Verification LBA range: start 0x0 length 0x1400000 00:16:51.782 ftl0 : 4.01 9476.64 37.02 0.00 0.00 13465.80 245.10 17686.82 00:16:51.782 =================================================================================================================== 00:16:51.782 Total : 9476.64 37.02 0.00 0.00 13465.80 0.00 17686.82 00:16:51.782 [2024-07-20 15:52:26.095206] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:51.782 0 00:16:51.782 15:52:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:51.782 [2024-07-20 15:52:26.284888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.782 [2024-07-20 15:52:26.284937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:51.782 [2024-07-20 15:52:26.284953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:51.782 [2024-07-20 15:52:26.284974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.782 [2024-07-20 15:52:26.285001] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:51.782 [2024-07-20 15:52:26.285665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.782 [2024-07-20 15:52:26.285681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:51.782 [2024-07-20 15:52:26.285695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:16:51.782 [2024-07-20 15:52:26.285705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.782 [2024-07-20 15:52:26.287668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.782 [2024-07-20 15:52:26.287703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:51.782 [2024-07-20 15:52:26.287719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.938 ms 00:16:51.782 [2024-07-20 15:52:26.287730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.782 [2024-07-20 15:52:26.486620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.782 [2024-07-20 15:52:26.486666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:51.783 [2024-07-20 15:52:26.486688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 199.181 ms 00:16:51.783 [2024-07-20 15:52:26.486699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.783 [2024-07-20 15:52:26.491749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.783 [2024-07-20 15:52:26.491778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:51.783 [2024-07-20 15:52:26.491792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.019 ms 00:16:51.783 [2024-07-20 15:52:26.491802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.783 [2024-07-20 15:52:26.493550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.783 [2024-07-20 15:52:26.493583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:51.783 [2024-07-20 15:52:26.493597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.690 ms 00:16:51.783 [2024-07-20 15:52:26.493607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.783 [2024-07-20 15:52:26.498152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.783 [2024-07-20 15:52:26.498187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:51.783 [2024-07-20 15:52:26.498227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.507 ms 00:16:51.783 [2024-07-20 15:52:26.498237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.783 [2024-07-20 15:52:26.498364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.783 [2024-07-20 15:52:26.498377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:51.783 [2024-07-20 15:52:26.498399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:16:51.783 [2024-07-20 15:52:26.498408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.783 [2024-07-20 15:52:26.500452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.783 [2024-07-20 15:52:26.500483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:51.783 [2024-07-20 15:52:26.500497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.026 ms 00:16:51.783 [2024-07-20 15:52:26.500506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.783 [2024-07-20 15:52:26.502172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.783 [2024-07-20 15:52:26.502205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:51.783 [2024-07-20 15:52:26.502219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.621 ms 00:16:51.783 [2024-07-20 15:52:26.502228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.783 [2024-07-20 15:52:26.503381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.783 [2024-07-20 15:52:26.503409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:51.783 [2024-07-20 15:52:26.503423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:16:51.783 [2024-07-20 15:52:26.503432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.783 [2024-07-20 15:52:26.504654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.783 [2024-07-20 15:52:26.504684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:51.783 [2024-07-20 15:52:26.504698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:16:51.783 [2024-07-20 15:52:26.504707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.783 [2024-07-20 15:52:26.504739] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:51.783 [2024-07-20 15:52:26.504758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 
[2024-07-20 15:52:26.504809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.504999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:16:51.783 [2024-07-20 15:52:26.505111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:51.783 [2024-07-20 15:52:26.505981] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:51.783 [2024-07-20 15:52:26.505992] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9d41ce85-b7cc-46f4-beef-21edc5a90b25 00:16:51.783 [2024-07-20 15:52:26.506003] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:51.783 [2024-07-20 15:52:26.506014] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
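All 100 bands in the dump above report 0 of 261120 valid blocks, a write count of 0, and state free. A quick way to summarize such a dump from a saved log — a sketch only; the file name here is illustrative:

  grep -o 'Band [0-9]*: .* state: [a-z]*' ftl-run.log | awk '{print $NF}' | sort | uniq -c
  #    100 free    <- expected for the dump above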
00:16:51.783 [2024-07-20 15:52:26.506023] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:51.783 [2024-07-20 15:52:26.506036] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:51.783 [2024-07-20 15:52:26.506045] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:51.783 [2024-07-20 15:52:26.506059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:51.783 [2024-07-20 15:52:26.506068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:51.784 [2024-07-20 15:52:26.506079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:51.784 [2024-07-20 15:52:26.506087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:51.784 [2024-07-20 15:52:26.506098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.784 [2024-07-20 15:52:26.506108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:51.784 [2024-07-20 15:52:26.506122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:16:51.784 [2024-07-20 15:52:26.506132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.784 [2024-07-20 15:52:26.507889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.784 [2024-07-20 15:52:26.507907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:51.784 [2024-07-20 15:52:26.507920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.727 ms 00:16:51.784 [2024-07-20 15:52:26.507930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.784 [2024-07-20 15:52:26.508048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.784 [2024-07-20 15:52:26.508062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:51.784 [2024-07-20 15:52:26.508075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:16:51.784 [2024-07-20 15:52:26.508085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.784 [2024-07-20 15:52:26.514248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.784 [2024-07-20 15:52:26.514277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:51.784 [2024-07-20 15:52:26.514291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.784 [2024-07-20 15:52:26.514316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.784 [2024-07-20 15:52:26.514368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.784 [2024-07-20 15:52:26.514393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:51.784 [2024-07-20 15:52:26.514416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.784 [2024-07-20 15:52:26.514426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.784 [2024-07-20 15:52:26.514481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.784 [2024-07-20 15:52:26.514492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:51.784 [2024-07-20 15:52:26.514505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.784 [2024-07-20 15:52:26.514521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.784 [2024-07-20 15:52:26.514539] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.784 [2024-07-20 15:52:26.514555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:51.784 [2024-07-20 15:52:26.514571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.784 [2024-07-20 15:52:26.514581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.784 [2024-07-20 15:52:26.525635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.784 [2024-07-20 15:52:26.525673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:51.784 [2024-07-20 15:52:26.525688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.784 [2024-07-20 15:52:26.525698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.784 [2024-07-20 15:52:26.533780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.784 [2024-07-20 15:52:26.533812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:51.784 [2024-07-20 15:52:26.533829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.784 [2024-07-20 15:52:26.533839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.784 [2024-07-20 15:52:26.533913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.784 [2024-07-20 15:52:26.533924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:51.784 [2024-07-20 15:52:26.533937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.784 [2024-07-20 15:52:26.533946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.784 [2024-07-20 15:52:26.533979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.784 [2024-07-20 15:52:26.533990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:51.784 [2024-07-20 15:52:26.534002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.784 [2024-07-20 15:52:26.534014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.784 [2024-07-20 15:52:26.534126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.784 [2024-07-20 15:52:26.534139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:51.784 [2024-07-20 15:52:26.534152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.784 [2024-07-20 15:52:26.534162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.784 [2024-07-20 15:52:26.534198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.784 [2024-07-20 15:52:26.534210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:51.784 [2024-07-20 15:52:26.534231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.784 [2024-07-20 15:52:26.534241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.784 [2024-07-20 15:52:26.534297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.784 [2024-07-20 15:52:26.534308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:51.784 [2024-07-20 15:52:26.534320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.784 [2024-07-20 15:52:26.534330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:51.784 [2024-07-20 15:52:26.534444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:51.784 [2024-07-20 15:52:26.534457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:51.784 [2024-07-20 15:52:26.534470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:51.784 [2024-07-20 15:52:26.534482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.784 [2024-07-20 15:52:26.534611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 250.086 ms, result 0 00:16:51.784 true 00:16:51.784 15:52:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 88376 00:16:51.784 15:52:26 ftl.ftl_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 88376 ']' 00:16:51.784 15:52:26 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # kill -0 88376 00:16:51.784 15:52:26 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # uname 00:16:51.784 15:52:26 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:51.784 15:52:26 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88376 00:16:52.044 15:52:26 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:52.044 15:52:26 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:52.044 killing process with pid 88376 00:16:52.044 15:52:26 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88376' 00:16:52.044 15:52:26 ftl.ftl_bdevperf -- common/autotest_common.sh@965 -- # kill 88376 00:16:52.044 Received shutdown signal, test time was about 4.000000 seconds 00:16:52.044 00:16:52.044 Latency(us) 00:16:52.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.044 =================================================================================================================== 00:16:52.044 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:52.044 15:52:26 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # wait 88376 00:16:52.303 15:52:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:16:52.303 15:52:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:52.303 15:52:26 ftl.ftl_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:52.303 15:52:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:52.303 15:52:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:16:52.303 Remove shared memory files 00:16:52.303 15:52:26 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:52.303 15:52:26 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:52.303 15:52:26 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:52.303 15:52:26 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:52.303 15:52:26 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:52.303 15:52:26 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:52.303 00:16:52.303 real 0m20.618s 00:16:52.303 user 0m22.808s 00:16:52.303 sys 0m1.124s 00:16:52.303 15:52:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:52.303 15:52:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:52.303 ************************************ 00:16:52.303 END TEST ftl_bdevperf 00:16:52.303 
************************************ 00:16:52.303 15:52:27 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:52.303 15:52:27 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:52.303 15:52:27 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:52.303 15:52:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:52.303 ************************************ 00:16:52.303 START TEST ftl_trim 00:16:52.303 ************************************ 00:16:52.303 15:52:27 ftl.ftl_trim -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:52.562 * Looking for test storage... 00:16:52.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 
00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=88716 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 88716 00:16:52.562 15:52:27 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:52.562 15:52:27 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 88716 ']' 00:16:52.562 15:52:27 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.562 15:52:27 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:52.562 15:52:27 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.562 15:52:27 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:52.562 15:52:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:52.562 [2024-07-20 15:52:27.280211] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
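The -m 0x7 argument passed to spdk_tgt above is a hexadecimal core mask: 0x7 is binary 111, pinning one reactor to each of cores 0, 1, and 2, which matches the three reactors reported in the startup output that follows. A small sketch of the mask arithmetic:

  # bits 0-2 set -> cores 0, 1 and 2
  printf 'coremask 0x%x\n' $(( (1 << 0) | (1 << 1) | (1 << 2) ))   # coremask 0x7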
00:16:52.563 [2024-07-20 15:52:27.280377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88716 ] 00:16:52.821 [2024-07-20 15:52:27.430233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:52.821 [2024-07-20 15:52:27.476177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.821 [2024-07-20 15:52:27.476263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.821 [2024-07-20 15:52:27.476409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.388 15:52:28 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:53.388 15:52:28 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:16:53.388 15:52:28 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:53.388 15:52:28 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:53.388 15:52:28 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:53.388 15:52:28 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:53.388 15:52:28 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:53.388 15:52:28 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:53.646 15:52:28 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:53.646 15:52:28 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:53.646 15:52:28 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:53.646 15:52:28 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:16:53.646 15:52:28 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:53.646 15:52:28 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:53.646 15:52:28 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:16:53.646 15:52:28 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:53.905 15:52:28 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:53.905 { 00:16:53.905 "name": "nvme0n1", 00:16:53.905 "aliases": [ 00:16:53.905 "045ab72a-8932-466f-abe2-289caf570365" 00:16:53.905 ], 00:16:53.905 "product_name": "NVMe disk", 00:16:53.905 "block_size": 4096, 00:16:53.905 "num_blocks": 1310720, 00:16:53.905 "uuid": "045ab72a-8932-466f-abe2-289caf570365", 00:16:53.905 "assigned_rate_limits": { 00:16:53.905 "rw_ios_per_sec": 0, 00:16:53.905 "rw_mbytes_per_sec": 0, 00:16:53.905 "r_mbytes_per_sec": 0, 00:16:53.905 "w_mbytes_per_sec": 0 00:16:53.905 }, 00:16:53.905 "claimed": true, 00:16:53.905 "claim_type": "read_many_write_one", 00:16:53.905 "zoned": false, 00:16:53.905 "supported_io_types": { 00:16:53.905 "read": true, 00:16:53.905 "write": true, 00:16:53.905 "unmap": true, 00:16:53.905 "write_zeroes": true, 00:16:53.905 "flush": true, 00:16:53.905 "reset": true, 00:16:53.905 "compare": true, 00:16:53.905 "compare_and_write": false, 00:16:53.905 "abort": true, 00:16:53.905 "nvme_admin": true, 00:16:53.905 "nvme_io": true 00:16:53.905 }, 00:16:53.905 "driver_specific": { 00:16:53.905 "nvme": [ 00:16:53.905 { 00:16:53.905 "pci_address": "0000:00:11.0", 00:16:53.905 "trid": { 00:16:53.905 "trtype": "PCIe", 00:16:53.905 "traddr": "0000:00:11.0" 00:16:53.905 }, 00:16:53.905 "ctrlr_data": { 
00:16:53.905 "cntlid": 0, 00:16:53.905 "vendor_id": "0x1b36", 00:16:53.905 "model_number": "QEMU NVMe Ctrl", 00:16:53.905 "serial_number": "12341", 00:16:53.905 "firmware_revision": "8.0.0", 00:16:53.905 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:53.905 "oacs": { 00:16:53.905 "security": 0, 00:16:53.905 "format": 1, 00:16:53.905 "firmware": 0, 00:16:53.905 "ns_manage": 1 00:16:53.905 }, 00:16:53.905 "multi_ctrlr": false, 00:16:53.905 "ana_reporting": false 00:16:53.905 }, 00:16:53.905 "vs": { 00:16:53.905 "nvme_version": "1.4" 00:16:53.905 }, 00:16:53.905 "ns_data": { 00:16:53.905 "id": 1, 00:16:53.905 "can_share": false 00:16:53.905 } 00:16:53.905 } 00:16:53.905 ], 00:16:53.905 "mp_policy": "active_passive" 00:16:53.905 } 00:16:53.905 } 00:16:53.905 ]' 00:16:53.905 15:52:28 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:53.905 15:52:28 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:53.905 15:52:28 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:53.905 15:52:28 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=1310720 00:16:53.905 15:52:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:16:53.905 15:52:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 5120 00:16:53.905 15:52:28 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:53.905 15:52:28 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:53.905 15:52:28 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:53.905 15:52:28 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:53.905 15:52:28 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:54.164 15:52:28 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=9a3860a3-e07f-4295-aeef-91c95ea9e1e0 00:16:54.164 15:52:28 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:54.164 15:52:28 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9a3860a3-e07f-4295-aeef-91c95ea9e1e0 00:16:54.422 15:52:28 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:54.422 15:52:29 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=77f7c260-4873-4959-882c-5afc0f96e76f 00:16:54.422 15:52:29 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 77f7c260-4873-4959-882c-5afc0f96e76f 00:16:54.680 15:52:29 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=e9db9dcb-7880-4d25-9fc1-045a947afcc6 00:16:54.680 15:52:29 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e9db9dcb-7880-4d25-9fc1-045a947afcc6 00:16:54.680 15:52:29 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:54.680 15:52:29 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:54.680 15:52:29 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=e9db9dcb-7880-4d25-9fc1-045a947afcc6 00:16:54.680 15:52:29 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:54.680 15:52:29 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size e9db9dcb-7880-4d25-9fc1-045a947afcc6 00:16:54.680 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=e9db9dcb-7880-4d25-9fc1-045a947afcc6 00:16:54.680 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:54.680 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:54.680 15:52:29 ftl.ftl_trim -- 
common/autotest_common.sh@1377 -- # local nb 00:16:54.680 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9db9dcb-7880-4d25-9fc1-045a947afcc6 00:16:54.939 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:54.939 { 00:16:54.939 "name": "e9db9dcb-7880-4d25-9fc1-045a947afcc6", 00:16:54.939 "aliases": [ 00:16:54.939 "lvs/nvme0n1p0" 00:16:54.939 ], 00:16:54.939 "product_name": "Logical Volume", 00:16:54.939 "block_size": 4096, 00:16:54.939 "num_blocks": 26476544, 00:16:54.939 "uuid": "e9db9dcb-7880-4d25-9fc1-045a947afcc6", 00:16:54.939 "assigned_rate_limits": { 00:16:54.939 "rw_ios_per_sec": 0, 00:16:54.939 "rw_mbytes_per_sec": 0, 00:16:54.939 "r_mbytes_per_sec": 0, 00:16:54.939 "w_mbytes_per_sec": 0 00:16:54.939 }, 00:16:54.939 "claimed": false, 00:16:54.939 "zoned": false, 00:16:54.939 "supported_io_types": { 00:16:54.939 "read": true, 00:16:54.939 "write": true, 00:16:54.939 "unmap": true, 00:16:54.939 "write_zeroes": true, 00:16:54.939 "flush": false, 00:16:54.939 "reset": true, 00:16:54.939 "compare": false, 00:16:54.939 "compare_and_write": false, 00:16:54.939 "abort": false, 00:16:54.939 "nvme_admin": false, 00:16:54.939 "nvme_io": false 00:16:54.939 }, 00:16:54.939 "driver_specific": { 00:16:54.939 "lvol": { 00:16:54.939 "lvol_store_uuid": "77f7c260-4873-4959-882c-5afc0f96e76f", 00:16:54.939 "base_bdev": "nvme0n1", 00:16:54.939 "thin_provision": true, 00:16:54.939 "num_allocated_clusters": 0, 00:16:54.939 "snapshot": false, 00:16:54.939 "clone": false, 00:16:54.939 "esnap_clone": false 00:16:54.939 } 00:16:54.939 } 00:16:54.939 } 00:16:54.939 ]' 00:16:54.939 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:54.939 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:54.939 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:54.939 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:54.939 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:54.939 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:16:54.939 15:52:29 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:54.939 15:52:29 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:54.939 15:52:29 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:55.197 15:52:29 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:55.197 15:52:29 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:55.197 15:52:29 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size e9db9dcb-7880-4d25-9fc1-045a947afcc6 00:16:55.197 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=e9db9dcb-7880-4d25-9fc1-045a947afcc6 00:16:55.197 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:55.197 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:55.197 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:16:55.197 15:52:29 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9db9dcb-7880-4d25-9fc1-045a947afcc6 00:16:55.455 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:55.455 { 00:16:55.455 "name": "e9db9dcb-7880-4d25-9fc1-045a947afcc6", 00:16:55.455 "aliases": [ 00:16:55.455 
"lvs/nvme0n1p0" 00:16:55.455 ], 00:16:55.455 "product_name": "Logical Volume", 00:16:55.455 "block_size": 4096, 00:16:55.455 "num_blocks": 26476544, 00:16:55.455 "uuid": "e9db9dcb-7880-4d25-9fc1-045a947afcc6", 00:16:55.456 "assigned_rate_limits": { 00:16:55.456 "rw_ios_per_sec": 0, 00:16:55.456 "rw_mbytes_per_sec": 0, 00:16:55.456 "r_mbytes_per_sec": 0, 00:16:55.456 "w_mbytes_per_sec": 0 00:16:55.456 }, 00:16:55.456 "claimed": false, 00:16:55.456 "zoned": false, 00:16:55.456 "supported_io_types": { 00:16:55.456 "read": true, 00:16:55.456 "write": true, 00:16:55.456 "unmap": true, 00:16:55.456 "write_zeroes": true, 00:16:55.456 "flush": false, 00:16:55.456 "reset": true, 00:16:55.456 "compare": false, 00:16:55.456 "compare_and_write": false, 00:16:55.456 "abort": false, 00:16:55.456 "nvme_admin": false, 00:16:55.456 "nvme_io": false 00:16:55.456 }, 00:16:55.456 "driver_specific": { 00:16:55.456 "lvol": { 00:16:55.456 "lvol_store_uuid": "77f7c260-4873-4959-882c-5afc0f96e76f", 00:16:55.456 "base_bdev": "nvme0n1", 00:16:55.456 "thin_provision": true, 00:16:55.456 "num_allocated_clusters": 0, 00:16:55.456 "snapshot": false, 00:16:55.456 "clone": false, 00:16:55.456 "esnap_clone": false 00:16:55.456 } 00:16:55.456 } 00:16:55.456 } 00:16:55.456 ]' 00:16:55.456 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:55.456 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:55.456 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:55.456 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:55.456 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:55.456 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:16:55.456 15:52:30 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:55.456 15:52:30 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:55.714 15:52:30 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:55.714 15:52:30 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:55.714 15:52:30 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size e9db9dcb-7880-4d25-9fc1-045a947afcc6 00:16:55.714 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=e9db9dcb-7880-4d25-9fc1-045a947afcc6 00:16:55.714 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:55.714 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:16:55.714 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:16:55.714 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9db9dcb-7880-4d25-9fc1-045a947afcc6 00:16:55.972 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:55.972 { 00:16:55.972 "name": "e9db9dcb-7880-4d25-9fc1-045a947afcc6", 00:16:55.972 "aliases": [ 00:16:55.972 "lvs/nvme0n1p0" 00:16:55.972 ], 00:16:55.972 "product_name": "Logical Volume", 00:16:55.972 "block_size": 4096, 00:16:55.972 "num_blocks": 26476544, 00:16:55.972 "uuid": "e9db9dcb-7880-4d25-9fc1-045a947afcc6", 00:16:55.972 "assigned_rate_limits": { 00:16:55.972 "rw_ios_per_sec": 0, 00:16:55.972 "rw_mbytes_per_sec": 0, 00:16:55.972 "r_mbytes_per_sec": 0, 00:16:55.972 "w_mbytes_per_sec": 0 00:16:55.972 }, 00:16:55.972 "claimed": false, 00:16:55.972 "zoned": false, 00:16:55.972 "supported_io_types": { 00:16:55.972 "read": 
true, 00:16:55.972 "write": true, 00:16:55.972 "unmap": true, 00:16:55.972 "write_zeroes": true, 00:16:55.972 "flush": false, 00:16:55.972 "reset": true, 00:16:55.972 "compare": false, 00:16:55.972 "compare_and_write": false, 00:16:55.972 "abort": false, 00:16:55.972 "nvme_admin": false, 00:16:55.972 "nvme_io": false 00:16:55.972 }, 00:16:55.972 "driver_specific": { 00:16:55.972 "lvol": { 00:16:55.972 "lvol_store_uuid": "77f7c260-4873-4959-882c-5afc0f96e76f", 00:16:55.972 "base_bdev": "nvme0n1", 00:16:55.972 "thin_provision": true, 00:16:55.972 "num_allocated_clusters": 0, 00:16:55.972 "snapshot": false, 00:16:55.972 "clone": false, 00:16:55.972 "esnap_clone": false 00:16:55.972 } 00:16:55.972 } 00:16:55.972 } 00:16:55.972 ]' 00:16:55.973 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:55.973 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:16:55.973 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:55.973 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:55.973 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:55.973 15:52:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:16:55.973 15:52:30 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:55.973 15:52:30 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e9db9dcb-7880-4d25-9fc1-045a947afcc6 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:56.232 [2024-07-20 15:52:30.769435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.232 [2024-07-20 15:52:30.769482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:56.232 [2024-07-20 15:52:30.769501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:56.232 [2024-07-20 15:52:30.769512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.232 [2024-07-20 15:52:30.772052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.232 [2024-07-20 15:52:30.772087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:56.232 [2024-07-20 15:52:30.772102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.510 ms 00:16:56.232 [2024-07-20 15:52:30.772113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.232 [2024-07-20 15:52:30.772241] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:56.232 [2024-07-20 15:52:30.772499] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:56.232 [2024-07-20 15:52:30.772534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.232 [2024-07-20 15:52:30.772545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:56.232 [2024-07-20 15:52:30.772558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:16:56.232 [2024-07-20 15:52:30.772571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.232 [2024-07-20 15:52:30.772690] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 97f98535-0125-4c1a-b8ee-497ceb063ba4 00:16:56.232 [2024-07-20 15:52:30.774082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.232 [2024-07-20 15:52:30.774119] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:56.232 [2024-07-20 15:52:30.774132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:56.232 [2024-07-20 15:52:30.774144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.232 [2024-07-20 15:52:30.781543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.232 [2024-07-20 15:52:30.781576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:56.232 [2024-07-20 15:52:30.781589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.318 ms 00:16:56.232 [2024-07-20 15:52:30.781602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.232 [2024-07-20 15:52:30.781732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.232 [2024-07-20 15:52:30.781753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:56.233 [2024-07-20 15:52:30.781764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:56.233 [2024-07-20 15:52:30.781778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.233 [2024-07-20 15:52:30.781826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.233 [2024-07-20 15:52:30.781841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:56.233 [2024-07-20 15:52:30.781852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:56.233 [2024-07-20 15:52:30.781864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.233 [2024-07-20 15:52:30.781901] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:56.233 [2024-07-20 15:52:30.783683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.233 [2024-07-20 15:52:30.783724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:56.233 [2024-07-20 15:52:30.783739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.786 ms 00:16:56.233 [2024-07-20 15:52:30.783752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.233 [2024-07-20 15:52:30.783810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.233 [2024-07-20 15:52:30.783821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:56.233 [2024-07-20 15:52:30.783835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:56.233 [2024-07-20 15:52:30.783845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.233 [2024-07-20 15:52:30.783885] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:56.233 [2024-07-20 15:52:30.784035] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:56.233 [2024-07-20 15:52:30.784054] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:56.233 [2024-07-20 15:52:30.784068] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:56.233 [2024-07-20 15:52:30.784084] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:56.233 [2024-07-20 15:52:30.784096] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV 
cache device capacity: 5171.00 MiB 00:16:56.233 [2024-07-20 15:52:30.784110] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:56.233 [2024-07-20 15:52:30.784124] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:56.233 [2024-07-20 15:52:30.784135] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:56.233 [2024-07-20 15:52:30.784146] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:56.233 [2024-07-20 15:52:30.784159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.233 [2024-07-20 15:52:30.784172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:56.233 [2024-07-20 15:52:30.784196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:16:56.233 [2024-07-20 15:52:30.784207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.233 [2024-07-20 15:52:30.784293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.233 [2024-07-20 15:52:30.784303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:56.233 [2024-07-20 15:52:30.784332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:56.233 [2024-07-20 15:52:30.784342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.233 [2024-07-20 15:52:30.784468] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:56.233 [2024-07-20 15:52:30.784481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:56.233 [2024-07-20 15:52:30.784499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.233 [2024-07-20 15:52:30.784509] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.233 [2024-07-20 15:52:30.784521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:56.233 [2024-07-20 15:52:30.784530] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:56.233 [2024-07-20 15:52:30.784542] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:56.233 [2024-07-20 15:52:30.784552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:56.233 [2024-07-20 15:52:30.784564] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:56.233 [2024-07-20 15:52:30.784574] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.233 [2024-07-20 15:52:30.784585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:56.233 [2024-07-20 15:52:30.784594] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:56.233 [2024-07-20 15:52:30.784606] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.233 [2024-07-20 15:52:30.784616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:56.233 [2024-07-20 15:52:30.784630] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:56.233 [2024-07-20 15:52:30.784639] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.233 [2024-07-20 15:52:30.784652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:56.233 [2024-07-20 15:52:30.784662] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:56.233 [2024-07-20 15:52:30.784673] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
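[Editor's note] The get_bdev_size helper traced repeatedly above reduces to two jq reads over bdev_get_bdevs plus a blocks-to-MiB conversion. A minimal standalone sketch, assuming a running SPDK target and the in-repo scripts/rpc.py; the $rpc shorthand is ours, and the bdev name is this run's lvol UUID, shown for illustration only:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdev=e9db9dcb-7880-4d25-9fc1-045a947afcc6                      # thin lvol created above
  bs=$($rpc bdev_get_bdevs -b "$bdev" | jq '.[] .block_size')    # 4096
  nb=$($rpc bdev_get_bdevs -b "$bdev" | jq '.[] .num_blocks')    # 26476544
  echo $(( bs * nb / 1024 / 1024 ))                              # 4096 * 26476544 / 2^20 = 103424 MiB

The same arithmetic explains the earlier base_size=5120: 1310720 blocks * 4096 B / 2^20 = 5120 MiB for the nvme0n1 namespace.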
00:16:56.233 [2024-07-20 15:52:30.784683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:56.233 [2024-07-20 15:52:30.784695] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:56.233 [2024-07-20 15:52:30.784704] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.233 [2024-07-20 15:52:30.784715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:56.233 [2024-07-20 15:52:30.784725] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:56.233 [2024-07-20 15:52:30.784737] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.233 [2024-07-20 15:52:30.784747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:56.233 [2024-07-20 15:52:30.784759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:56.233 [2024-07-20 15:52:30.784768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.233 [2024-07-20 15:52:30.784781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:56.233 [2024-07-20 15:52:30.784791] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:56.233 [2024-07-20 15:52:30.784805] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.233 [2024-07-20 15:52:30.784814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:56.233 [2024-07-20 15:52:30.784826] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:56.233 [2024-07-20 15:52:30.784834] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.233 [2024-07-20 15:52:30.784846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:56.233 [2024-07-20 15:52:30.784869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:56.233 [2024-07-20 15:52:30.784881] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.233 [2024-07-20 15:52:30.784891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:56.233 [2024-07-20 15:52:30.784902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:56.233 [2024-07-20 15:52:30.784912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.233 [2024-07-20 15:52:30.784924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:56.233 [2024-07-20 15:52:30.784933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:56.233 [2024-07-20 15:52:30.784945] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.233 [2024-07-20 15:52:30.784954] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:56.233 [2024-07-20 15:52:30.784967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:56.233 [2024-07-20 15:52:30.784977] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.233 [2024-07-20 15:52:30.784992] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.233 [2024-07-20 15:52:30.785002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:56.233 [2024-07-20 15:52:30.785015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:56.233 [2024-07-20 15:52:30.785025] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:56.233 [2024-07-20 15:52:30.785037] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:16:56.233 [2024-07-20 15:52:30.785046] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:56.233 [2024-07-20 15:52:30.785060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:56.233 [2024-07-20 15:52:30.785073] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:56.233 [2024-07-20 15:52:30.785089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.233 [2024-07-20 15:52:30.785101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:56.233 [2024-07-20 15:52:30.785114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:56.233 [2024-07-20 15:52:30.785125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:56.233 [2024-07-20 15:52:30.785138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:56.233 [2024-07-20 15:52:30.785148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:56.233 [2024-07-20 15:52:30.785161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:56.233 [2024-07-20 15:52:30.785172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:56.233 [2024-07-20 15:52:30.785187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:56.233 [2024-07-20 15:52:30.785197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:56.233 [2024-07-20 15:52:30.785210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:56.233 [2024-07-20 15:52:30.785221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:56.233 [2024-07-20 15:52:30.785233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:56.233 [2024-07-20 15:52:30.785244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:56.233 [2024-07-20 15:52:30.785257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:56.233 [2024-07-20 15:52:30.785267] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:56.233 [2024-07-20 15:52:30.785281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.233 [2024-07-20 15:52:30.785295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:56.234 [2024-07-20 
15:52:30.785307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:56.234 [2024-07-20 15:52:30.785318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:56.234 [2024-07-20 15:52:30.785331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:56.234 [2024-07-20 15:52:30.785342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.234 [2024-07-20 15:52:30.785366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:56.234 [2024-07-20 15:52:30.785377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:16:56.234 [2024-07-20 15:52:30.785392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.234 [2024-07-20 15:52:30.785480] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:16:56.234 [2024-07-20 15:52:30.785495] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:59.526 [2024-07-20 15:52:34.089674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.526 [2024-07-20 15:52:34.089742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:59.526 [2024-07-20 15:52:34.089759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3309.556 ms 00:16:59.526 [2024-07-20 15:52:34.089776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.526 [2024-07-20 15:52:34.100890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.526 [2024-07-20 15:52:34.100941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:59.526 [2024-07-20 15:52:34.100974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.036 ms 00:16:59.526 [2024-07-20 15:52:34.100988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.526 [2024-07-20 15:52:34.101126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.526 [2024-07-20 15:52:34.101144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:59.526 [2024-07-20 15:52:34.101155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:59.526 [2024-07-20 15:52:34.101171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.526 [2024-07-20 15:52:34.121379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.526 [2024-07-20 15:52:34.121428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:59.526 [2024-07-20 15:52:34.121447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.199 ms 00:16:59.526 [2024-07-20 15:52:34.121485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.526 [2024-07-20 15:52:34.121600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.526 [2024-07-20 15:52:34.121619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:59.526 [2024-07-20 15:52:34.121649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:59.526 [2024-07-20 15:52:34.121670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.526 [2024-07-20 
15:52:34.122137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.526 [2024-07-20 15:52:34.122164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:59.526 [2024-07-20 15:52:34.122178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:16:59.526 [2024-07-20 15:52:34.122195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.526 [2024-07-20 15:52:34.122384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.526 [2024-07-20 15:52:34.122410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:59.526 [2024-07-20 15:52:34.122438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:16:59.526 [2024-07-20 15:52:34.122470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.526 [2024-07-20 15:52:34.130373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.526 [2024-07-20 15:52:34.130411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:59.526 [2024-07-20 15:52:34.130425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.870 ms 00:16:59.526 [2024-07-20 15:52:34.130438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.526 [2024-07-20 15:52:34.138247] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:59.526 [2024-07-20 15:52:34.154685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.526 [2024-07-20 15:52:34.154728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:59.526 [2024-07-20 15:52:34.154746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.176 ms 00:16:59.526 [2024-07-20 15:52:34.154756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.526 [2024-07-20 15:52:34.230385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.527 [2024-07-20 15:52:34.230435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:59.527 [2024-07-20 15:52:34.230454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.646 ms 00:16:59.527 [2024-07-20 15:52:34.230465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.527 [2024-07-20 15:52:34.230674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.527 [2024-07-20 15:52:34.230686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:59.527 [2024-07-20 15:52:34.230701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:16:59.527 [2024-07-20 15:52:34.230711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.527 [2024-07-20 15:52:34.234302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.527 [2024-07-20 15:52:34.234340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:59.527 [2024-07-20 15:52:34.234365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.558 ms 00:16:59.527 [2024-07-20 15:52:34.234376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.527 [2024-07-20 15:52:34.237344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.527 [2024-07-20 15:52:34.237384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:59.527 [2024-07-20 
15:52:34.237401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.914 ms 00:16:59.527 [2024-07-20 15:52:34.237410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.527 [2024-07-20 15:52:34.237697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.527 [2024-07-20 15:52:34.237711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:59.527 [2024-07-20 15:52:34.237725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:16:59.527 [2024-07-20 15:52:34.237736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.527 [2024-07-20 15:52:34.274640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.527 [2024-07-20 15:52:34.274685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:59.527 [2024-07-20 15:52:34.274703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.922 ms 00:16:59.527 [2024-07-20 15:52:34.274714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.527 [2024-07-20 15:52:34.279202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.527 [2024-07-20 15:52:34.279241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:59.527 [2024-07-20 15:52:34.279257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.427 ms 00:16:59.527 [2024-07-20 15:52:34.279268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.527 [2024-07-20 15:52:34.282629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.527 [2024-07-20 15:52:34.282660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:59.527 [2024-07-20 15:52:34.282676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:16:59.527 [2024-07-20 15:52:34.282685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.527 [2024-07-20 15:52:34.286432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.527 [2024-07-20 15:52:34.286464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:59.527 [2024-07-20 15:52:34.286480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.700 ms 00:16:59.527 [2024-07-20 15:52:34.286489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.527 [2024-07-20 15:52:34.286549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.527 [2024-07-20 15:52:34.286576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:59.527 [2024-07-20 15:52:34.286591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:59.527 [2024-07-20 15:52:34.286602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.527 [2024-07-20 15:52:34.286692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.527 [2024-07-20 15:52:34.286715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:59.527 [2024-07-20 15:52:34.286729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:59.527 [2024-07-20 15:52:34.286740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.527 [2024-07-20 15:52:34.287708] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:59.527 [2024-07-20 15:52:34.288631] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3523.703 ms, result 0 00:16:59.527 [2024-07-20 15:52:34.289335] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:59.527 { 00:16:59.527 "name": "ftl0", 00:16:59.527 "uuid": "97f98535-0125-4c1a-b8ee-497ceb063ba4" 00:16:59.527 } 00:16:59.527 15:52:34 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:59.527 15:52:34 ftl.ftl_trim -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:16:59.527 15:52:34 ftl.ftl_trim -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:59.527 15:52:34 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local i 00:16:59.527 15:52:34 ftl.ftl_trim -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:59.527 15:52:34 ftl.ftl_trim -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:59.527 15:52:34 ftl.ftl_trim -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:59.785 15:52:34 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:00.043 [ 00:17:00.043 { 00:17:00.043 "name": "ftl0", 00:17:00.043 "aliases": [ 00:17:00.043 "97f98535-0125-4c1a-b8ee-497ceb063ba4" 00:17:00.043 ], 00:17:00.043 "product_name": "FTL disk", 00:17:00.043 "block_size": 4096, 00:17:00.043 "num_blocks": 23592960, 00:17:00.043 "uuid": "97f98535-0125-4c1a-b8ee-497ceb063ba4", 00:17:00.043 "assigned_rate_limits": { 00:17:00.043 "rw_ios_per_sec": 0, 00:17:00.043 "rw_mbytes_per_sec": 0, 00:17:00.043 "r_mbytes_per_sec": 0, 00:17:00.043 "w_mbytes_per_sec": 0 00:17:00.043 }, 00:17:00.043 "claimed": false, 00:17:00.043 "zoned": false, 00:17:00.043 "supported_io_types": { 00:17:00.043 "read": true, 00:17:00.043 "write": true, 00:17:00.043 "unmap": true, 00:17:00.043 "write_zeroes": true, 00:17:00.043 "flush": true, 00:17:00.043 "reset": false, 00:17:00.043 "compare": false, 00:17:00.043 "compare_and_write": false, 00:17:00.043 "abort": false, 00:17:00.043 "nvme_admin": false, 00:17:00.043 "nvme_io": false 00:17:00.043 }, 00:17:00.043 "driver_specific": { 00:17:00.043 "ftl": { 00:17:00.043 "base_bdev": "e9db9dcb-7880-4d25-9fc1-045a947afcc6", 00:17:00.043 "cache": "nvc0n1p0" 00:17:00.043 } 00:17:00.043 } 00:17:00.043 } 00:17:00.043 ] 00:17:00.043 15:52:34 ftl.ftl_trim -- common/autotest_common.sh@903 -- # return 0 00:17:00.043 15:52:34 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:00.043 15:52:34 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:00.301 15:52:34 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:00.301 15:52:34 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:00.301 15:52:35 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:00.301 { 00:17:00.301 "name": "ftl0", 00:17:00.301 "aliases": [ 00:17:00.301 "97f98535-0125-4c1a-b8ee-497ceb063ba4" 00:17:00.301 ], 00:17:00.301 "product_name": "FTL disk", 00:17:00.301 "block_size": 4096, 00:17:00.301 "num_blocks": 23592960, 00:17:00.301 "uuid": "97f98535-0125-4c1a-b8ee-497ceb063ba4", 00:17:00.301 "assigned_rate_limits": { 00:17:00.301 "rw_ios_per_sec": 0, 00:17:00.301 "rw_mbytes_per_sec": 0, 00:17:00.301 "r_mbytes_per_sec": 0, 00:17:00.301 "w_mbytes_per_sec": 0 00:17:00.301 }, 00:17:00.301 "claimed": false, 00:17:00.301 "zoned": false, 00:17:00.301 "supported_io_types": { 
00:17:00.301 "read": true, 00:17:00.301 "write": true, 00:17:00.301 "unmap": true, 00:17:00.301 "write_zeroes": true, 00:17:00.301 "flush": true, 00:17:00.301 "reset": false, 00:17:00.301 "compare": false, 00:17:00.301 "compare_and_write": false, 00:17:00.301 "abort": false, 00:17:00.301 "nvme_admin": false, 00:17:00.301 "nvme_io": false 00:17:00.301 }, 00:17:00.301 "driver_specific": { 00:17:00.301 "ftl": { 00:17:00.301 "base_bdev": "e9db9dcb-7880-4d25-9fc1-045a947afcc6", 00:17:00.301 "cache": "nvc0n1p0" 00:17:00.301 } 00:17:00.301 } 00:17:00.301 } 00:17:00.301 ]' 00:17:00.301 15:52:35 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:00.301 15:52:35 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:00.301 15:52:35 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:00.560 [2024-07-20 15:52:35.232442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.560 [2024-07-20 15:52:35.232495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:00.560 [2024-07-20 15:52:35.232511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:00.560 [2024-07-20 15:52:35.232540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.560 [2024-07-20 15:52:35.232581] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:00.560 [2024-07-20 15:52:35.233257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.560 [2024-07-20 15:52:35.233274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:00.560 [2024-07-20 15:52:35.233288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:17:00.560 [2024-07-20 15:52:35.233298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.560 [2024-07-20 15:52:35.233803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.560 [2024-07-20 15:52:35.233821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:00.560 [2024-07-20 15:52:35.233835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:17:00.560 [2024-07-20 15:52:35.233845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.560 [2024-07-20 15:52:35.236678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.560 [2024-07-20 15:52:35.236701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:00.560 [2024-07-20 15:52:35.236715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.804 ms 00:17:00.560 [2024-07-20 15:52:35.236739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.560 [2024-07-20 15:52:35.242469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.560 [2024-07-20 15:52:35.242504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:00.560 [2024-07-20 15:52:35.242538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.669 ms 00:17:00.560 [2024-07-20 15:52:35.242548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.560 [2024-07-20 15:52:35.244282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.560 [2024-07-20 15:52:35.244319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:00.560 [2024-07-20 15:52:35.244335] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.647 ms 00:17:00.560 [2024-07-20 15:52:35.244344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.560 [2024-07-20 15:52:35.249204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.560 [2024-07-20 15:52:35.249242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:00.560 [2024-07-20 15:52:35.249259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.804 ms 00:17:00.560 [2024-07-20 15:52:35.249269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.560 [2024-07-20 15:52:35.249462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.560 [2024-07-20 15:52:35.249480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:00.560 [2024-07-20 15:52:35.249494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:17:00.560 [2024-07-20 15:52:35.249505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.560 [2024-07-20 15:52:35.251337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.560 [2024-07-20 15:52:35.251381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:00.560 [2024-07-20 15:52:35.251396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.796 ms 00:17:00.560 [2024-07-20 15:52:35.251406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.560 [2024-07-20 15:52:35.252930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.560 [2024-07-20 15:52:35.252963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:00.560 [2024-07-20 15:52:35.252978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.475 ms 00:17:00.560 [2024-07-20 15:52:35.252987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.560 [2024-07-20 15:52:35.254119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.560 [2024-07-20 15:52:35.254150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:00.560 [2024-07-20 15:52:35.254165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.087 ms 00:17:00.560 [2024-07-20 15:52:35.254174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.560 [2024-07-20 15:52:35.255345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.560 [2024-07-20 15:52:35.255387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:00.560 [2024-07-20 15:52:35.255402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:17:00.560 [2024-07-20 15:52:35.255411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.560 [2024-07-20 15:52:35.255454] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:00.560 [2024-07-20 15:52:35.255484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
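[Editor's note] The 'FTL startup' trace above (15:52:30-34, 3523.703 ms total) is the product of a single bdev_ftl_create call on a device stack assembled a few RPCs earlier. Condensed from the trace into a sketch — same $rpc shorthand as above, not the test script itself; UUIDs and the PCIe address are this run's:

  $rpc bdev_lvol_create_lvstore nvme0n1 lvs                      # -> lvstore 77f7c260-...
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 77f7c260-4873-4959-882c-5afc0f96e76f   # thin base lvol
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0                    # cache controller -> nvc0n1
  $rpc bdev_split_create nvc0n1 -s 5171 1                        # -> nvc0n1p0, the NV cache partition
  $rpc -t 240 bdev_ftl_create -b ftl0 -d e9db9dcb-7880-4d25-9fc1-045a947afcc6 -c nvc0n1p0 \
      --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The 240 s RPC timeout covers the NV cache scrub, which alone accounted for ~3.3 s of the startup here (Scrub NV cache, duration 3309.556 ms).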
00:17:00.560 [2024-07-20 15:52:35.255540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.255988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.256001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.256012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.256025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.256036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.256049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.256060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.256073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.256084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.256097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.256109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.256125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:00.560 [2024-07-20 15:52:35.256135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256476] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:00.561 [2024-07-20 15:52:35.256773] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:00.561 [2024-07-20 15:52:35.256786] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f98535-0125-4c1a-b8ee-497ceb063ba4 00:17:00.561 [2024-07-20 15:52:35.256797] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:00.561 [2024-07-20 15:52:35.256810] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:00.561 [2024-07-20 15:52:35.256819] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:00.561 [2024-07-20 15:52:35.256848] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:00.561 [2024-07-20 15:52:35.256858] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:00.561 [2024-07-20 15:52:35.256871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:00.561 [2024-07-20 15:52:35.256882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:00.561 [2024-07-20 15:52:35.256894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:00.561 [2024-07-20 15:52:35.256903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:00.561 [2024-07-20 15:52:35.256916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.561 [2024-07-20 15:52:35.256925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:00.561 [2024-07-20 15:52:35.256938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.466 ms 00:17:00.561 [2024-07-20 15:52:35.256948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.561 [2024-07-20 15:52:35.258823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.561 [2024-07-20 15:52:35.258847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:00.561 [2024-07-20 15:52:35.258861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.820 ms 00:17:00.561 [2024-07-20 15:52:35.258872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.561 [2024-07-20 15:52:35.259003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.561 [2024-07-20 15:52:35.259015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:00.561 [2024-07-20 15:52:35.259029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:00.561 [2024-07-20 15:52:35.259052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.561 [2024-07-20 15:52:35.266110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.561 [2024-07-20 15:52:35.266136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:00.561 [2024-07-20 15:52:35.266150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.561 [2024-07-20 15:52:35.266160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.561 [2024-07-20 15:52:35.266235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.561 [2024-07-20 15:52:35.266247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:00.561 [2024-07-20 15:52:35.266260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.561 [2024-07-20 15:52:35.266279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.561 [2024-07-20 15:52:35.266348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.561 [2024-07-20 15:52:35.266401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:00.561 [2024-07-20 15:52:35.266418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.561 [2024-07-20 15:52:35.266429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.561 [2024-07-20 15:52:35.266461] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.561 [2024-07-20 15:52:35.266472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:00.561 [2024-07-20 15:52:35.266484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.561 [2024-07-20 15:52:35.266494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.561 [2024-07-20 15:52:35.278950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.561 [2024-07-20 15:52:35.279002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:00.561 [2024-07-20 15:52:35.279018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.561 [2024-07-20 15:52:35.279032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.561 [2024-07-20 15:52:35.287350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.561 [2024-07-20 15:52:35.287395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.561 [2024-07-20 15:52:35.287411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.561 [2024-07-20 15:52:35.287437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.561 [2024-07-20 15:52:35.287502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.561 [2024-07-20 15:52:35.287514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:00.561 [2024-07-20 15:52:35.287528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.561 [2024-07-20 15:52:35.287541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.561 [2024-07-20 15:52:35.287600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.561 [2024-07-20 15:52:35.287611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:00.561 [2024-07-20 15:52:35.287635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.561 [2024-07-20 15:52:35.287645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.561 [2024-07-20 15:52:35.287743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.561 [2024-07-20 15:52:35.287755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:00.561 [2024-07-20 15:52:35.287768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.561 [2024-07-20 15:52:35.287778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.561 [2024-07-20 15:52:35.287842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.561 [2024-07-20 15:52:35.287855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:00.561 [2024-07-20 15:52:35.287867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.561 [2024-07-20 15:52:35.287877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.561 [2024-07-20 15:52:35.287946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.561 [2024-07-20 15:52:35.287958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:00.561 [2024-07-20 15:52:35.287971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.561 [2024-07-20 15:52:35.287981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:00.561 [2024-07-20 15:52:35.288043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:00.561 [2024-07-20 15:52:35.288054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:00.561 [2024-07-20 15:52:35.288079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:00.561 [2024-07-20 15:52:35.288089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.561 [2024-07-20 15:52:35.288272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.894 ms, result 0 00:17:00.561 true 00:17:00.561 15:52:35 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 88716 00:17:00.561 15:52:35 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 88716 ']' 00:17:00.561 15:52:35 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 88716 00:17:00.561 15:52:35 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:17:00.561 15:52:35 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:00.561 15:52:35 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88716 00:17:00.561 killing process with pid 88716 00:17:00.561 15:52:35 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:00.561 15:52:35 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:00.561 15:52:35 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88716' 00:17:00.561 15:52:35 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 88716 00:17:00.561 15:52:35 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 88716 00:17:03.846 15:52:38 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:04.412 65536+0 records in 00:17:04.412 65536+0 records out 00:17:04.412 268435456 bytes (268 MB, 256 MiB) copied, 0.920855 s, 292 MB/s 00:17:04.412 15:52:39 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:04.670 [2024-07-20 15:52:39.220268] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:17:04.670 [2024-07-20 15:52:39.220424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88892 ] 00:17:04.670 [2024-07-20 15:52:39.371651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.670 [2024-07-20 15:52:39.412035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.930 [2024-07-20 15:52:39.512288] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:04.930 [2024-07-20 15:52:39.512399] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:04.930 [2024-07-20 15:52:39.663564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.930 [2024-07-20 15:52:39.663608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:04.930 [2024-07-20 15:52:39.663623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:04.930 [2024-07-20 15:52:39.663648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.930 [2024-07-20 15:52:39.666017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.930 [2024-07-20 15:52:39.666054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:04.930 [2024-07-20 15:52:39.666066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.353 ms 00:17:04.930 [2024-07-20 15:52:39.666075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.930 [2024-07-20 15:52:39.666164] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:04.930 [2024-07-20 15:52:39.666415] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:04.930 [2024-07-20 15:52:39.666435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.930 [2024-07-20 15:52:39.666446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:04.930 [2024-07-20 15:52:39.666459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:17:04.930 [2024-07-20 15:52:39.666476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.930 [2024-07-20 15:52:39.667905] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:04.930 [2024-07-20 15:52:39.670411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.930 [2024-07-20 15:52:39.670444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:04.930 [2024-07-20 15:52:39.670457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.511 ms 00:17:04.930 [2024-07-20 15:52:39.670466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.930 [2024-07-20 15:52:39.670531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.930 [2024-07-20 15:52:39.670544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:04.930 [2024-07-20 15:52:39.670555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:04.930 [2024-07-20 15:52:39.670567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.930 [2024-07-20 15:52:39.677201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.930 [2024-07-20 
15:52:39.677225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:04.930 [2024-07-20 15:52:39.677236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.602 ms 00:17:04.930 [2024-07-20 15:52:39.677246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.930 [2024-07-20 15:52:39.677371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.930 [2024-07-20 15:52:39.677385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:04.930 [2024-07-20 15:52:39.677419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:04.930 [2024-07-20 15:52:39.677432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.930 [2024-07-20 15:52:39.677463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.930 [2024-07-20 15:52:39.677477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:04.930 [2024-07-20 15:52:39.677487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:04.930 [2024-07-20 15:52:39.677496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.930 [2024-07-20 15:52:39.677518] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:04.930 [2024-07-20 15:52:39.679120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.930 [2024-07-20 15:52:39.679148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:04.930 [2024-07-20 15:52:39.679164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.610 ms 00:17:04.930 [2024-07-20 15:52:39.679173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.930 [2024-07-20 15:52:39.679219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.930 [2024-07-20 15:52:39.679230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:04.930 [2024-07-20 15:52:39.679241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:04.930 [2024-07-20 15:52:39.679250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.930 [2024-07-20 15:52:39.679270] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:04.930 [2024-07-20 15:52:39.679291] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:04.930 [2024-07-20 15:52:39.679337] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:04.930 [2024-07-20 15:52:39.679378] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:04.930 [2024-07-20 15:52:39.679462] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:04.930 [2024-07-20 15:52:39.679476] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:04.930 [2024-07-20 15:52:39.679488] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:04.930 [2024-07-20 15:52:39.679501] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:04.930 [2024-07-20 15:52:39.679513] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:04.930 [2024-07-20 15:52:39.679524] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:04.930 [2024-07-20 15:52:39.679534] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:04.930 [2024-07-20 15:52:39.679543] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:04.930 [2024-07-20 15:52:39.679556] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:04.930 [2024-07-20 15:52:39.679567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.930 [2024-07-20 15:52:39.679577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:04.930 [2024-07-20 15:52:39.679597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:17:04.930 [2024-07-20 15:52:39.679607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.930 [2024-07-20 15:52:39.679688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.930 [2024-07-20 15:52:39.679714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:04.930 [2024-07-20 15:52:39.679724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:04.930 [2024-07-20 15:52:39.679733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.930 [2024-07-20 15:52:39.679819] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:04.930 [2024-07-20 15:52:39.679832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:04.931 [2024-07-20 15:52:39.679843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:04.931 [2024-07-20 15:52:39.679853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.931 [2024-07-20 15:52:39.679869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:04.931 [2024-07-20 15:52:39.679879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:04.931 [2024-07-20 15:52:39.679888] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:04.931 [2024-07-20 15:52:39.679897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:04.931 [2024-07-20 15:52:39.679907] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:04.931 [2024-07-20 15:52:39.679916] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:04.931 [2024-07-20 15:52:39.679925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:04.931 [2024-07-20 15:52:39.679934] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:04.931 [2024-07-20 15:52:39.679946] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:04.931 [2024-07-20 15:52:39.679955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:04.931 [2024-07-20 15:52:39.679965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:04.931 [2024-07-20 15:52:39.679975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.931 [2024-07-20 15:52:39.679984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:04.931 [2024-07-20 15:52:39.679994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:04.931 [2024-07-20 15:52:39.680003] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:04.931 [2024-07-20 15:52:39.680012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:04.931 [2024-07-20 15:52:39.680021] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:04.931 [2024-07-20 15:52:39.680029] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.931 [2024-07-20 15:52:39.680038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:04.931 [2024-07-20 15:52:39.680048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:04.931 [2024-07-20 15:52:39.680057] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.931 [2024-07-20 15:52:39.680066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:04.931 [2024-07-20 15:52:39.680075] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:04.931 [2024-07-20 15:52:39.680084] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.931 [2024-07-20 15:52:39.680101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:04.931 [2024-07-20 15:52:39.680111] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:04.931 [2024-07-20 15:52:39.680120] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.931 [2024-07-20 15:52:39.680128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:04.931 [2024-07-20 15:52:39.680137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:04.931 [2024-07-20 15:52:39.680146] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:04.931 [2024-07-20 15:52:39.680156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:04.931 [2024-07-20 15:52:39.680166] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:04.931 [2024-07-20 15:52:39.680175] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:04.931 [2024-07-20 15:52:39.680184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:04.931 [2024-07-20 15:52:39.680193] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:04.931 [2024-07-20 15:52:39.680202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.931 [2024-07-20 15:52:39.680211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:04.931 [2024-07-20 15:52:39.680221] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:04.931 [2024-07-20 15:52:39.680230] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.931 [2024-07-20 15:52:39.680238] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:04.931 [2024-07-20 15:52:39.680251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:04.931 [2024-07-20 15:52:39.680260] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:04.931 [2024-07-20 15:52:39.680270] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.931 [2024-07-20 15:52:39.680279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:04.931 [2024-07-20 15:52:39.680288] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:04.931 [2024-07-20 15:52:39.680297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:04.931 [2024-07-20 15:52:39.680307] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:04.931 [2024-07-20 15:52:39.680315] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:04.931 [2024-07-20 15:52:39.680324] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:04.931 [2024-07-20 15:52:39.680334] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:04.931 [2024-07-20 15:52:39.680346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:04.931 [2024-07-20 15:52:39.680357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:04.931 [2024-07-20 15:52:39.680367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:04.931 [2024-07-20 15:52:39.680377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:04.931 [2024-07-20 15:52:39.680415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:04.931 [2024-07-20 15:52:39.680425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:04.931 [2024-07-20 15:52:39.680438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:04.931 [2024-07-20 15:52:39.680448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:04.931 [2024-07-20 15:52:39.680458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:04.931 [2024-07-20 15:52:39.680468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:04.931 [2024-07-20 15:52:39.680478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:04.931 [2024-07-20 15:52:39.680488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:04.931 [2024-07-20 15:52:39.680499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:04.931 [2024-07-20 15:52:39.680509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:04.931 [2024-07-20 15:52:39.680519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:04.931 [2024-07-20 15:52:39.680529] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:04.931 [2024-07-20 15:52:39.680542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:04.931 [2024-07-20 15:52:39.680561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:17:04.931 [2024-07-20 15:52:39.680572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:04.931 [2024-07-20 15:52:39.680583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:04.931 [2024-07-20 15:52:39.680593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:04.931 [2024-07-20 15:52:39.680604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.931 [2024-07-20 15:52:39.680618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:04.931 [2024-07-20 15:52:39.680628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:17:04.931 [2024-07-20 15:52:39.680645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.931 [2024-07-20 15:52:39.702748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.931 [2024-07-20 15:52:39.702781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.931 [2024-07-20 15:52:39.702795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.085 ms 00:17:04.931 [2024-07-20 15:52:39.702828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.931 [2024-07-20 15:52:39.702981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.931 [2024-07-20 15:52:39.702996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:04.931 [2024-07-20 15:52:39.703010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:04.931 [2024-07-20 15:52:39.703022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.931 [2024-07-20 15:52:39.714245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.931 [2024-07-20 15:52:39.714450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:04.931 [2024-07-20 15:52:39.714594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.212 ms 00:17:04.931 [2024-07-20 15:52:39.714641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.931 [2024-07-20 15:52:39.714732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.931 [2024-07-20 15:52:39.714770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:04.931 [2024-07-20 15:52:39.714863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:04.931 [2024-07-20 15:52:39.714898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.931 [2024-07-20 15:52:39.715379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.931 [2024-07-20 15:52:39.715536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:04.931 [2024-07-20 15:52:39.715555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:17:04.931 [2024-07-20 15:52:39.715566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.931 [2024-07-20 15:52:39.715697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.931 [2024-07-20 15:52:39.715710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:04.931 [2024-07-20 15:52:39.715729] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:04.931 [2024-07-20 15:52:39.715740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.931 [2024-07-20 15:52:39.722038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.931 [2024-07-20 15:52:39.722071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:04.931 [2024-07-20 15:52:39.722084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.285 ms 00:17:04.931 [2024-07-20 15:52:39.722095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.197 [2024-07-20 15:52:39.724704] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:05.197 [2024-07-20 15:52:39.724747] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:05.197 [2024-07-20 15:52:39.724763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.197 [2024-07-20 15:52:39.724777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:05.197 [2024-07-20 15:52:39.724788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.566 ms 00:17:05.197 [2024-07-20 15:52:39.724799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.197 [2024-07-20 15:52:39.737284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.197 [2024-07-20 15:52:39.737418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:05.197 [2024-07-20 15:52:39.737504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.440 ms 00:17:05.197 [2024-07-20 15:52:39.737545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.197 [2024-07-20 15:52:39.739210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.197 [2024-07-20 15:52:39.739336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:05.197 [2024-07-20 15:52:39.739418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:17:05.197 [2024-07-20 15:52:39.739452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.197 [2024-07-20 15:52:39.740917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.197 [2024-07-20 15:52:39.741037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:05.197 [2024-07-20 15:52:39.741104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:17:05.197 [2024-07-20 15:52:39.741138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.197 [2024-07-20 15:52:39.741454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.197 [2024-07-20 15:52:39.741577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:05.197 [2024-07-20 15:52:39.741597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:17:05.197 [2024-07-20 15:52:39.741607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.197 [2024-07-20 15:52:39.761561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.197 [2024-07-20 15:52:39.761622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:05.197 [2024-07-20 15:52:39.761639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.940 ms 
00:17:05.197 [2024-07-20 15:52:39.761650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.197 [2024-07-20 15:52:39.767751] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:05.197 [2024-07-20 15:52:39.783324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.197 [2024-07-20 15:52:39.783373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:05.197 [2024-07-20 15:52:39.783404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.627 ms 00:17:05.197 [2024-07-20 15:52:39.783414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.197 [2024-07-20 15:52:39.783498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.197 [2024-07-20 15:52:39.783511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:05.197 [2024-07-20 15:52:39.783541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:05.197 [2024-07-20 15:52:39.783562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.197 [2024-07-20 15:52:39.783627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.197 [2024-07-20 15:52:39.783646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:05.197 [2024-07-20 15:52:39.783656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:05.197 [2024-07-20 15:52:39.783666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.197 [2024-07-20 15:52:39.783692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.197 [2024-07-20 15:52:39.783703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:05.197 [2024-07-20 15:52:39.783713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:05.197 [2024-07-20 15:52:39.783735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.197 [2024-07-20 15:52:39.783773] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:05.197 [2024-07-20 15:52:39.783784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.197 [2024-07-20 15:52:39.783800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:05.197 [2024-07-20 15:52:39.783810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:05.197 [2024-07-20 15:52:39.783820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.197 [2024-07-20 15:52:39.787505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.197 [2024-07-20 15:52:39.787540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:05.197 [2024-07-20 15:52:39.787562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.667 ms 00:17:05.197 [2024-07-20 15:52:39.787580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.197 [2024-07-20 15:52:39.787669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.197 [2024-07-20 15:52:39.787682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:05.197 [2024-07-20 15:52:39.787701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:05.197 [2024-07-20 15:52:39.787718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.197 [2024-07-20 
15:52:39.788644] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:05.197 [2024-07-20 15:52:39.789578] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 124.987 ms, result 0 00:17:05.198 [2024-07-20 15:52:39.790236] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:05.198 [2024-07-20 15:52:39.800073] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:16.032  Copying: 23/256 [MB] (23 MBps) Copying: 47/256 [MB] (23 MBps) Copying: 71/256 [MB] (23 MBps) Copying: 94/256 [MB] (23 MBps) Copying: 117/256 [MB] (23 MBps) Copying: 140/256 [MB] (22 MBps) Copying: 163/256 [MB] (23 MBps) Copying: 187/256 [MB] (23 MBps) Copying: 210/256 [MB] (23 MBps) Copying: 234/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-07-20 15:52:50.741556] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:16.032 [2024-07-20 15:52:50.742871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.032 [2024-07-20 15:52:50.742893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:16.032 [2024-07-20 15:52:50.742907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:16.032 [2024-07-20 15:52:50.742917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.032 [2024-07-20 15:52:50.742937] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:16.032 [2024-07-20 15:52:50.743590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.032 [2024-07-20 15:52:50.743603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:16.032 [2024-07-20 15:52:50.743613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:17:16.032 [2024-07-20 15:52:50.743623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.032 [2024-07-20 15:52:50.745322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.032 [2024-07-20 15:52:50.745370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:16.032 [2024-07-20 15:52:50.745390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.682 ms 00:17:16.032 [2024-07-20 15:52:50.745400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.032 [2024-07-20 15:52:50.752125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.032 [2024-07-20 15:52:50.752155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:16.032 [2024-07-20 15:52:50.752166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.716 ms 00:17:16.032 [2024-07-20 15:52:50.752176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.032 [2024-07-20 15:52:50.757602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.032 [2024-07-20 15:52:50.757628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:16.033 [2024-07-20 15:52:50.757639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.399 ms 00:17:16.033 [2024-07-20 15:52:50.757654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.033 [2024-07-20 15:52:50.759155] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.033 [2024-07-20 15:52:50.759183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:16.033 [2024-07-20 15:52:50.759194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.453 ms 00:17:16.033 [2024-07-20 15:52:50.759204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.033 [2024-07-20 15:52:50.763068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.033 [2024-07-20 15:52:50.763194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:16.033 [2024-07-20 15:52:50.763269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.840 ms 00:17:16.033 [2024-07-20 15:52:50.763305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.033 [2024-07-20 15:52:50.763437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.033 [2024-07-20 15:52:50.763532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:16.033 [2024-07-20 15:52:50.763574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:16.033 [2024-07-20 15:52:50.763603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.033 [2024-07-20 15:52:50.765664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.033 [2024-07-20 15:52:50.765788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:16.033 [2024-07-20 15:52:50.765861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.969 ms 00:17:16.033 [2024-07-20 15:52:50.765894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.033 [2024-07-20 15:52:50.767466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.033 [2024-07-20 15:52:50.767587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:16.033 [2024-07-20 15:52:50.767662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.521 ms 00:17:16.033 [2024-07-20 15:52:50.767696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.033 [2024-07-20 15:52:50.768837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.033 [2024-07-20 15:52:50.768949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:16.033 [2024-07-20 15:52:50.769019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.073 ms 00:17:16.033 [2024-07-20 15:52:50.769051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.033 [2024-07-20 15:52:50.770158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.033 [2024-07-20 15:52:50.770278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:16.033 [2024-07-20 15:52:50.770349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:17:16.033 [2024-07-20 15:52:50.770395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.033 [2024-07-20 15:52:50.770449] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:16.033 [2024-07-20 15:52:50.770555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 
[2024-07-20 15:52:50.770580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:17:16.033 [2024-07-20 15:52:50.770844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.770991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:16.033 [2024-07-20 15:52:50.771212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:16.034 [2024-07-20 15:52:50.771649] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:16.034 [2024-07-20 15:52:50.771659] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
97f98535-0125-4c1a-b8ee-497ceb063ba4 00:17:16.034 [2024-07-20 15:52:50.771669] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:16.034 [2024-07-20 15:52:50.771680] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:16.034 [2024-07-20 15:52:50.771689] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:16.034 [2024-07-20 15:52:50.771700] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:16.034 [2024-07-20 15:52:50.771709] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:16.034 [2024-07-20 15:52:50.771723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:16.034 [2024-07-20 15:52:50.771732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:16.034 [2024-07-20 15:52:50.771741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:16.034 [2024-07-20 15:52:50.771750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:16.034 [2024-07-20 15:52:50.771760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.034 [2024-07-20 15:52:50.771770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:16.034 [2024-07-20 15:52:50.771787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.314 ms 00:17:16.034 [2024-07-20 15:52:50.771801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.034 [2024-07-20 15:52:50.773496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.034 [2024-07-20 15:52:50.773516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:16.034 [2024-07-20 15:52:50.773527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:17:16.034 [2024-07-20 15:52:50.773541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.034 [2024-07-20 15:52:50.773643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.034 [2024-07-20 15:52:50.773654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:16.034 [2024-07-20 15:52:50.773664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:16.034 [2024-07-20 15:52:50.773674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.034 [2024-07-20 15:52:50.780109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.034 [2024-07-20 15:52:50.780215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:16.034 [2024-07-20 15:52:50.780314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.034 [2024-07-20 15:52:50.780362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.034 [2024-07-20 15:52:50.780462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.034 [2024-07-20 15:52:50.780494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:16.034 [2024-07-20 15:52:50.780524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.034 [2024-07-20 15:52:50.780552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.034 [2024-07-20 15:52:50.780681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.034 [2024-07-20 15:52:50.780721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:16.034 
[2024-07-20 15:52:50.780751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.034 [2024-07-20 15:52:50.780779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.034 [2024-07-20 15:52:50.780823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.034 [2024-07-20 15:52:50.780855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:16.034 [2024-07-20 15:52:50.780973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.034 [2024-07-20 15:52:50.781001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.034 [2024-07-20 15:52:50.792135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.034 [2024-07-20 15:52:50.792292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:16.034 [2024-07-20 15:52:50.792402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.034 [2024-07-20 15:52:50.792446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.034 [2024-07-20 15:52:50.800657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.034 [2024-07-20 15:52:50.800799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:16.034 [2024-07-20 15:52:50.800877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.034 [2024-07-20 15:52:50.800912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.034 [2024-07-20 15:52:50.800959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.034 [2024-07-20 15:52:50.800990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:16.034 [2024-07-20 15:52:50.801019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.034 [2024-07-20 15:52:50.801048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.034 [2024-07-20 15:52:50.801152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.034 [2024-07-20 15:52:50.801194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:16.034 [2024-07-20 15:52:50.801224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.034 [2024-07-20 15:52:50.801252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.034 [2024-07-20 15:52:50.801353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.034 [2024-07-20 15:52:50.801485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:16.034 [2024-07-20 15:52:50.801530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.034 [2024-07-20 15:52:50.801559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.034 [2024-07-20 15:52:50.801676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.034 [2024-07-20 15:52:50.801727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:16.034 [2024-07-20 15:52:50.801895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.034 [2024-07-20 15:52:50.801931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.034 [2024-07-20 15:52:50.801998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.034 [2024-07-20 15:52:50.802034] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:16.034 [2024-07-20 15:52:50.802117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.034 [2024-07-20 15:52:50.802151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.034 [2024-07-20 15:52:50.802225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.034 [2024-07-20 15:52:50.802276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:16.034 [2024-07-20 15:52:50.802311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.034 [2024-07-20 15:52:50.802406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.035 [2024-07-20 15:52:50.802612] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.801 ms, result 0 00:17:16.614 00:17:16.614 00:17:16.614 15:52:51 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89015 00:17:16.614 15:52:51 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:16.614 15:52:51 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89015 00:17:16.614 15:52:51 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 89015 ']' 00:17:16.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.614 15:52:51 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.614 15:52:51 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:16.614 15:52:51 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.614 15:52:51 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:16.614 15:52:51 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:16.614 [2024-07-20 15:52:51.317986] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
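Editor's note on the trace above: after the first FTL instance finishes its 'FTL shutdown' sequence, trim.sh restarts the target for the trim test. The xtrace shows it backgrounding a fresh spdk_tgt with the ftl_init log flag, recording the pid (89015, trim.sh@71-72), then blocking in waitforlisten (trim.sh@73) until the RPC socket at /var/tmp/spdk.sock answers. A minimal bash sketch of that launch-and-wait pattern follows; the real helper lives in common/autotest_common.sh, the rpc_get_methods probe and the 0.5 s retry interval are simplifications, not the exact implementation.

    #!/usr/bin/env bash
    # Minimal sketch of the launch-and-wait pattern traced above.
    # The production logic is waitforlisten() in common/autotest_common.sh.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_addr=/var/tmp/spdk.sock

    "$SPDK_BIN/spdk_tgt" -L ftl_init &    # -L ftl_init enables FTL init tracing
    svcpid=$!

    max_retries=100                       # same bound shown in the trace
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        # Any RPC that succeeds proves the target is listening on the socket;
        # rpc_get_methods is a cheap liveness probe.
        if "$RPC" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5                         # retry interval is an assumption
    done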
00:17:16.614 [2024-07-20 15:52:51.318329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89015 ] 00:17:16.871 [2024-07-20 15:52:51.467830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.871 [2024-07-20 15:52:51.510285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.438 15:52:52 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:17.438 15:52:52 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:17:17.438 15:52:52 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:17.697 [2024-07-20 15:52:52.258403] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:17.697 [2024-07-20 15:52:52.258457] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:17.697 [2024-07-20 15:52:52.425391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.697 [2024-07-20 15:52:52.425572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:17.697 [2024-07-20 15:52:52.425687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:17.697 [2024-07-20 15:52:52.425726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.697 [2024-07-20 15:52:52.428275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.697 [2024-07-20 15:52:52.428448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:17.697 [2024-07-20 15:52:52.428576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.503 ms 00:17:17.697 [2024-07-20 15:52:52.428615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.697 [2024-07-20 15:52:52.428721] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:17.697 [2024-07-20 15:52:52.429069] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:17.697 [2024-07-20 15:52:52.429225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.697 [2024-07-20 15:52:52.429305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:17.697 [2024-07-20 15:52:52.429365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:17:17.697 [2024-07-20 15:52:52.429499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.697 [2024-07-20 15:52:52.431143] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:17.697 [2024-07-20 15:52:52.433683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.697 [2024-07-20 15:52:52.433722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:17.697 [2024-07-20 15:52:52.433736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.549 ms 00:17:17.697 [2024-07-20 15:52:52.433750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.697 [2024-07-20 15:52:52.433829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.697 [2024-07-20 15:52:52.433856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:17.697 [2024-07-20 15:52:52.433878] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:17.697 [2024-07-20 15:52:52.433893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.698 [2024-07-20 15:52:52.440596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.698 [2024-07-20 15:52:52.440624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:17.698 [2024-07-20 15:52:52.440636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.664 ms 00:17:17.698 [2024-07-20 15:52:52.440665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.698 [2024-07-20 15:52:52.440787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.698 [2024-07-20 15:52:52.440804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:17.698 [2024-07-20 15:52:52.440823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:17.698 [2024-07-20 15:52:52.440836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.698 [2024-07-20 15:52:52.440869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.698 [2024-07-20 15:52:52.440888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:17.698 [2024-07-20 15:52:52.440899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:17.698 [2024-07-20 15:52:52.440911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.698 [2024-07-20 15:52:52.440936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:17.698 [2024-07-20 15:52:52.442608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.698 [2024-07-20 15:52:52.442636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:17.698 [2024-07-20 15:52:52.442653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:17:17.698 [2024-07-20 15:52:52.442667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.698 [2024-07-20 15:52:52.442717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.698 [2024-07-20 15:52:52.442728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:17.698 [2024-07-20 15:52:52.442742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:17.698 [2024-07-20 15:52:52.442752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.698 [2024-07-20 15:52:52.442777] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:17.698 [2024-07-20 15:52:52.442799] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:17.698 [2024-07-20 15:52:52.442838] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:17.698 [2024-07-20 15:52:52.442859] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:17.698 [2024-07-20 15:52:52.442949] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:17.698 [2024-07-20 15:52:52.442963] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:17.698 [2024-07-20 15:52:52.442984] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:17.698 [2024-07-20 15:52:52.442999] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:17.698 [2024-07-20 15:52:52.443014] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:17.698 [2024-07-20 15:52:52.443026] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:17.698 [2024-07-20 15:52:52.443041] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:17.698 [2024-07-20 15:52:52.443051] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:17.698 [2024-07-20 15:52:52.443064] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:17.698 [2024-07-20 15:52:52.443077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.698 [2024-07-20 15:52:52.443090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:17.698 [2024-07-20 15:52:52.443101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:17:17.698 [2024-07-20 15:52:52.443123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.698 [2024-07-20 15:52:52.443213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.698 [2024-07-20 15:52:52.443226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:17.698 [2024-07-20 15:52:52.443243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:17.698 [2024-07-20 15:52:52.443255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.698 [2024-07-20 15:52:52.443347] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:17.698 [2024-07-20 15:52:52.443384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:17.698 [2024-07-20 15:52:52.443396] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:17.698 [2024-07-20 15:52:52.443409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.698 [2024-07-20 15:52:52.443421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:17.698 [2024-07-20 15:52:52.443435] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:17.698 [2024-07-20 15:52:52.443446] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:17.698 [2024-07-20 15:52:52.443458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:17.698 [2024-07-20 15:52:52.443469] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:17.698 [2024-07-20 15:52:52.443481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:17.698 [2024-07-20 15:52:52.443490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:17.698 [2024-07-20 15:52:52.443502] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:17.698 [2024-07-20 15:52:52.443512] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:17.698 [2024-07-20 15:52:52.443526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:17.698 [2024-07-20 15:52:52.443536] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:17.698 [2024-07-20 15:52:52.443548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.698 
[2024-07-20 15:52:52.443558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:17.698 [2024-07-20 15:52:52.443570] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:17.698 [2024-07-20 15:52:52.443579] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.698 [2024-07-20 15:52:52.443591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:17.698 [2024-07-20 15:52:52.443601] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:17.698 [2024-07-20 15:52:52.443615] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.698 [2024-07-20 15:52:52.443624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:17.698 [2024-07-20 15:52:52.443636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:17.698 [2024-07-20 15:52:52.443645] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.698 [2024-07-20 15:52:52.443658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:17.698 [2024-07-20 15:52:52.443667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:17.698 [2024-07-20 15:52:52.443681] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.698 [2024-07-20 15:52:52.443690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:17.698 [2024-07-20 15:52:52.443702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:17.698 [2024-07-20 15:52:52.443711] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.698 [2024-07-20 15:52:52.443722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:17.698 [2024-07-20 15:52:52.443732] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:17.698 [2024-07-20 15:52:52.443743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:17.698 [2024-07-20 15:52:52.443753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:17.698 [2024-07-20 15:52:52.443764] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:17.698 [2024-07-20 15:52:52.443774] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:17.698 [2024-07-20 15:52:52.443788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:17.698 [2024-07-20 15:52:52.443797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:17.698 [2024-07-20 15:52:52.443824] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.698 [2024-07-20 15:52:52.443834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:17.698 [2024-07-20 15:52:52.443846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:17.698 [2024-07-20 15:52:52.443855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.698 [2024-07-20 15:52:52.443867] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:17.698 [2024-07-20 15:52:52.443877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:17.698 [2024-07-20 15:52:52.443897] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:17.698 [2024-07-20 15:52:52.443919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.698 [2024-07-20 15:52:52.443932] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:17.698 [2024-07-20 15:52:52.443942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:17.698 [2024-07-20 15:52:52.443956] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:17.698 [2024-07-20 15:52:52.443966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:17.698 [2024-07-20 15:52:52.443978] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:17.698 [2024-07-20 15:52:52.443988] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:17.698 [2024-07-20 15:52:52.444003] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:17.698 [2024-07-20 15:52:52.444017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:17.698 [2024-07-20 15:52:52.444031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:17.698 [2024-07-20 15:52:52.444042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:17.698 [2024-07-20 15:52:52.444055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:17.698 [2024-07-20 15:52:52.444066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:17.698 [2024-07-20 15:52:52.444079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:17.698 [2024-07-20 15:52:52.444090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:17.698 [2024-07-20 15:52:52.444103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:17.698 [2024-07-20 15:52:52.444113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:17.698 [2024-07-20 15:52:52.444126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:17.698 [2024-07-20 15:52:52.444138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:17.698 [2024-07-20 15:52:52.444151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:17.699 [2024-07-20 15:52:52.444161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:17.699 [2024-07-20 15:52:52.444174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:17.699 [2024-07-20 15:52:52.444185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:17.699 [2024-07-20 15:52:52.444200] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:17.699 [2024-07-20 
15:52:52.444212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:17.699 [2024-07-20 15:52:52.444228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:17.699 [2024-07-20 15:52:52.444239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:17.699 [2024-07-20 15:52:52.444253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:17.699 [2024-07-20 15:52:52.444264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:17.699 [2024-07-20 15:52:52.444278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.699 [2024-07-20 15:52:52.444289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:17.699 [2024-07-20 15:52:52.444303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:17:17.699 [2024-07-20 15:52:52.444313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.699 [2024-07-20 15:52:52.456248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.699 [2024-07-20 15:52:52.456283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:17.699 [2024-07-20 15:52:52.456299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.881 ms 00:17:17.699 [2024-07-20 15:52:52.456309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.699 [2024-07-20 15:52:52.456454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.699 [2024-07-20 15:52:52.456471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:17.699 [2024-07-20 15:52:52.456488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:17.699 [2024-07-20 15:52:52.456498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.699 [2024-07-20 15:52:52.467393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.699 [2024-07-20 15:52:52.467426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:17.699 [2024-07-20 15:52:52.467443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.887 ms 00:17:17.699 [2024-07-20 15:52:52.467453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.699 [2024-07-20 15:52:52.467520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.699 [2024-07-20 15:52:52.467532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:17.699 [2024-07-20 15:52:52.467545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:17.699 [2024-07-20 15:52:52.467563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.699 [2024-07-20 15:52:52.467984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.699 [2024-07-20 15:52:52.467997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:17.699 [2024-07-20 15:52:52.468010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:17:17.699 [2024-07-20 15:52:52.468020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:17.699 [2024-07-20 15:52:52.468135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.699 [2024-07-20 15:52:52.468150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:17.699 [2024-07-20 15:52:52.468165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:17.699 [2024-07-20 15:52:52.468175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.699 [2024-07-20 15:52:52.475300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.699 [2024-07-20 15:52:52.475333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:17.699 [2024-07-20 15:52:52.475348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.111 ms 00:17:17.699 [2024-07-20 15:52:52.475372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.699 [2024-07-20 15:52:52.477992] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:17.699 [2024-07-20 15:52:52.478033] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:17.699 [2024-07-20 15:52:52.478051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.699 [2024-07-20 15:52:52.478078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:17.699 [2024-07-20 15:52:52.478092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:17:17.699 [2024-07-20 15:52:52.478102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.699 [2024-07-20 15:52:52.490611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.699 [2024-07-20 15:52:52.490740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:17.699 [2024-07-20 15:52:52.490818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.448 ms 00:17:17.699 [2024-07-20 15:52:52.490854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.958 [2024-07-20 15:52:52.492503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.958 [2024-07-20 15:52:52.492625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:17.958 [2024-07-20 15:52:52.492699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.551 ms 00:17:17.958 [2024-07-20 15:52:52.492714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.958 [2024-07-20 15:52:52.494082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.958 [2024-07-20 15:52:52.494115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:17.958 [2024-07-20 15:52:52.494129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.326 ms 00:17:17.959 [2024-07-20 15:52:52.494139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.959 [2024-07-20 15:52:52.494453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.959 [2024-07-20 15:52:52.494475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:17.959 [2024-07-20 15:52:52.494489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:17:17.959 [2024-07-20 15:52:52.494499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.959 [2024-07-20 15:52:52.533103] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.959 [2024-07-20 15:52:52.533174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:17.959 [2024-07-20 15:52:52.533200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.632 ms 00:17:17.959 [2024-07-20 15:52:52.533215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.959 [2024-07-20 15:52:52.540927] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:17.959 [2024-07-20 15:52:52.555770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.959 [2024-07-20 15:52:52.555810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:17.959 [2024-07-20 15:52:52.555825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.489 ms 00:17:17.959 [2024-07-20 15:52:52.555837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.959 [2024-07-20 15:52:52.555912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.959 [2024-07-20 15:52:52.555927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:17.959 [2024-07-20 15:52:52.555938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:17.959 [2024-07-20 15:52:52.555953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.959 [2024-07-20 15:52:52.556003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.959 [2024-07-20 15:52:52.556016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:17.959 [2024-07-20 15:52:52.556026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:17.959 [2024-07-20 15:52:52.556052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.959 [2024-07-20 15:52:52.556077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.959 [2024-07-20 15:52:52.556089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:17.959 [2024-07-20 15:52:52.556098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:17.959 [2024-07-20 15:52:52.556112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.959 [2024-07-20 15:52:52.556152] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:17.959 [2024-07-20 15:52:52.556166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.959 [2024-07-20 15:52:52.556182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:17.959 [2024-07-20 15:52:52.556194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:17.959 [2024-07-20 15:52:52.556203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.959 [2024-07-20 15:52:52.559918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.959 [2024-07-20 15:52:52.559952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:17.959 [2024-07-20 15:52:52.559969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.694 ms 00:17:17.959 [2024-07-20 15:52:52.559995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.959 [2024-07-20 15:52:52.560087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.959 [2024-07-20 15:52:52.560100] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:17.959 [2024-07-20 15:52:52.560113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:17.959 [2024-07-20 15:52:52.560123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.959 [2024-07-20 15:52:52.561138] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:17.959 [2024-07-20 15:52:52.562068] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.713 ms, result 0 00:17:17.959 [2024-07-20 15:52:52.563079] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:17.959 Some configs were skipped because the RPC state that can call them passed over. 00:17:17.959 15:52:52 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:18.218 [2024-07-20 15:52:52.769682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.218 [2024-07-20 15:52:52.769739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:18.218 [2024-07-20 15:52:52.769755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.602 ms 00:17:18.218 [2024-07-20 15:52:52.769769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.218 [2024-07-20 15:52:52.769806] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.730 ms, result 0 00:17:18.218 true 00:17:18.218 15:52:52 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:18.218 [2024-07-20 15:52:52.965107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.218 [2024-07-20 15:52:52.965287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:18.218 [2024-07-20 15:52:52.965383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.183 ms 00:17:18.218 [2024-07-20 15:52:52.965424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.218 [2024-07-20 15:52:52.965519] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.600 ms, result 0 00:17:18.218 true 00:17:18.218 15:52:52 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89015 00:17:18.218 15:52:52 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89015 ']' 00:17:18.218 15:52:52 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89015 00:17:18.218 15:52:52 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:17:18.219 15:52:53 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:18.219 15:52:53 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89015 00:17:18.479 killing process with pid 89015 00:17:18.479 15:52:53 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:18.479 15:52:53 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:18.479 15:52:53 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89015' 00:17:18.479 15:52:53 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 89015 00:17:18.479 15:52:53 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 89015 00:17:18.479 [2024-07-20 15:52:53.171842] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.479 [2024-07-20 15:52:53.171907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:18.479 [2024-07-20 15:52:53.171924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:18.479 [2024-07-20 15:52:53.171936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.479 [2024-07-20 15:52:53.171969] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:18.479 [2024-07-20 15:52:53.172678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.479 [2024-07-20 15:52:53.172691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:18.479 [2024-07-20 15:52:53.172704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:17:18.479 [2024-07-20 15:52:53.172715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.479 [2024-07-20 15:52:53.173005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.479 [2024-07-20 15:52:53.173018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:18.479 [2024-07-20 15:52:53.173030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:17:18.479 [2024-07-20 15:52:53.173040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.479 [2024-07-20 15:52:53.176433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.479 [2024-07-20 15:52:53.176469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:18.479 [2024-07-20 15:52:53.176486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.372 ms 00:17:18.479 [2024-07-20 15:52:53.176496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.479 [2024-07-20 15:52:53.182045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.479 [2024-07-20 15:52:53.182081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:18.479 [2024-07-20 15:52:53.182096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.514 ms 00:17:18.479 [2024-07-20 15:52:53.182106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.479 [2024-07-20 15:52:53.183634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.479 [2024-07-20 15:52:53.183757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:18.479 [2024-07-20 15:52:53.183831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.444 ms 00:17:18.479 [2024-07-20 15:52:53.183866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.479 [2024-07-20 15:52:53.187727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.479 [2024-07-20 15:52:53.187855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:18.479 [2024-07-20 15:52:53.187879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.743 ms 00:17:18.479 [2024-07-20 15:52:53.187895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.479 [2024-07-20 15:52:53.188053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.479 [2024-07-20 15:52:53.188067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:18.479 [2024-07-20 15:52:53.188081] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:18.479 [2024-07-20 15:52:53.188091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.479 [2024-07-20 15:52:53.190095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.479 [2024-07-20 15:52:53.190130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:18.479 [2024-07-20 15:52:53.190144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.983 ms 00:17:18.479 [2024-07-20 15:52:53.190154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.479 [2024-07-20 15:52:53.191718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.479 [2024-07-20 15:52:53.191752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:18.479 [2024-07-20 15:52:53.191773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.509 ms 00:17:18.479 [2024-07-20 15:52:53.191782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.479 [2024-07-20 15:52:53.193035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.479 [2024-07-20 15:52:53.193069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:18.479 [2024-07-20 15:52:53.193083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.216 ms 00:17:18.479 [2024-07-20 15:52:53.193093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.479 [2024-07-20 15:52:53.194222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.479 [2024-07-20 15:52:53.194350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:18.479 [2024-07-20 15:52:53.194450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:17:18.479 [2024-07-20 15:52:53.194485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.479 [2024-07-20 15:52:53.194558] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:18.479 [2024-07-20 15:52:53.194653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194780] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.194996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 
15:52:53.195095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:18.479 [2024-07-20 15:52:53.195250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:17:18.480 [2024-07-20 15:52:53.195409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:18.480 [2024-07-20 15:52:53.195919] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:18.480 [2024-07-20 15:52:53.195932] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f98535-0125-4c1a-b8ee-497ceb063ba4 00:17:18.480 [2024-07-20 15:52:53.195942] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:18.480 [2024-07-20 15:52:53.195954] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:18.480 [2024-07-20 15:52:53.195966] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:18.480 [2024-07-20 15:52:53.195979] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:18.480 [2024-07-20 15:52:53.195989] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:18.480 [2024-07-20 15:52:53.196001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:18.480 [2024-07-20 15:52:53.196017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:18.480 [2024-07-20 15:52:53.196029] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:18.480 [2024-07-20 15:52:53.196037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:18.480 [2024-07-20 15:52:53.196050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.480 
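Editor's note on the statistics block just dumped: the 'WAF: inf' entry follows directly from the two counters above it. Write amplification factor is media writes divided by user writes; this run issued no user I/O (user writes: 0), so all 960 media writes were FTL-internal metadata traffic and the ratio degenerates to infinity. A throwaway one-liner for pulling the same ratio out of a captured log, assuming one record per line, might look like the following (ftl0_shutdown.log is a hypothetical capture file, not produced by the harness):

    awk '
        /total writes:/ { total = $NF }   # media writes from ftl_dev_dump_stats
        /user writes:/  { user  = $NF }   # host-initiated writes
        END { print (user == 0 ? "WAF: inf" : "WAF: " total / user) }
    ' ftl0_shutdown.log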
[2024-07-20 15:52:53.196060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:18.480 [2024-07-20 15:52:53.196072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.497 ms 00:17:18.480 [2024-07-20 15:52:53.196082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.197797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.480 [2024-07-20 15:52:53.197820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:18.480 [2024-07-20 15:52:53.197834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.688 ms 00:17:18.480 [2024-07-20 15:52:53.197844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.197952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.480 [2024-07-20 15:52:53.197962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:18.480 [2024-07-20 15:52:53.197975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:18.480 [2024-07-20 15:52:53.197991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.204936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.480 [2024-07-20 15:52:53.204958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:18.480 [2024-07-20 15:52:53.204981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.480 [2024-07-20 15:52:53.204991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.205073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.480 [2024-07-20 15:52:53.205085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:18.480 [2024-07-20 15:52:53.205097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.480 [2024-07-20 15:52:53.205107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.205157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.480 [2024-07-20 15:52:53.205172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:18.480 [2024-07-20 15:52:53.205185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.480 [2024-07-20 15:52:53.205204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.205227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.480 [2024-07-20 15:52:53.205243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:18.480 [2024-07-20 15:52:53.205255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.480 [2024-07-20 15:52:53.205265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.217082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.480 [2024-07-20 15:52:53.217123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:18.480 [2024-07-20 15:52:53.217137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.480 [2024-07-20 15:52:53.217147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.225285] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.480 [2024-07-20 15:52:53.225317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:18.480 [2024-07-20 15:52:53.225332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.480 [2024-07-20 15:52:53.225349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.225421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.480 [2024-07-20 15:52:53.225432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.480 [2024-07-20 15:52:53.225449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.480 [2024-07-20 15:52:53.225459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.225494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.480 [2024-07-20 15:52:53.225505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.480 [2024-07-20 15:52:53.225517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.480 [2024-07-20 15:52:53.225536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.225617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.480 [2024-07-20 15:52:53.225629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.480 [2024-07-20 15:52:53.225643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.480 [2024-07-20 15:52:53.225655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.225695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.480 [2024-07-20 15:52:53.225706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:18.480 [2024-07-20 15:52:53.225718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.480 [2024-07-20 15:52:53.225728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.225788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.480 [2024-07-20 15:52:53.225800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.480 [2024-07-20 15:52:53.225812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.480 [2024-07-20 15:52:53.225825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.225873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.480 [2024-07-20 15:52:53.225884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.480 [2024-07-20 15:52:53.225897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.480 [2024-07-20 15:52:53.225907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.480 [2024-07-20 15:52:53.226052] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.266 ms, result 0 00:17:18.739 15:52:53 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:18.739 15:52:53 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:18.998 [2024-07-20 15:52:53.557403] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:18.998 [2024-07-20 15:52:53.557515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89057 ] 00:17:18.998 [2024-07-20 15:52:53.708405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.998 [2024-07-20 15:52:53.750325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.257 [2024-07-20 15:52:53.851687] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:19.258 [2024-07-20 15:52:53.851753] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:19.258 [2024-07-20 15:52:54.002806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.258 [2024-07-20 15:52:54.002855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:19.258 [2024-07-20 15:52:54.002871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:19.258 [2024-07-20 15:52:54.002882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.258 [2024-07-20 15:52:54.005298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.258 [2024-07-20 15:52:54.005346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:19.258 [2024-07-20 15:52:54.005390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:17:19.258 [2024-07-20 15:52:54.005401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.258 [2024-07-20 15:52:54.005485] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:19.258 [2024-07-20 15:52:54.005699] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:19.258 [2024-07-20 15:52:54.005716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.258 [2024-07-20 15:52:54.005734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.258 [2024-07-20 15:52:54.005748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:17:19.258 [2024-07-20 15:52:54.005758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.258 [2024-07-20 15:52:54.007235] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:19.258 [2024-07-20 15:52:54.009662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.258 [2024-07-20 15:52:54.009694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:19.258 [2024-07-20 15:52:54.009707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.433 ms 00:17:19.258 [2024-07-20 15:52:54.009741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.258 [2024-07-20 15:52:54.009817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.258 [2024-07-20 15:52:54.009830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:19.258 [2024-07-20 15:52:54.009848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 
ms 00:17:19.258 [2024-07-20 15:52:54.009861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.258 [2024-07-20 15:52:54.016663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.258 [2024-07-20 15:52:54.016814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.258 [2024-07-20 15:52:54.016952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.770 ms 00:17:19.258 [2024-07-20 15:52:54.016991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.258 [2024-07-20 15:52:54.017144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.258 [2024-07-20 15:52:54.017211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.258 [2024-07-20 15:52:54.017285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:19.258 [2024-07-20 15:52:54.017321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.258 [2024-07-20 15:52:54.017410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.258 [2024-07-20 15:52:54.017456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:19.258 [2024-07-20 15:52:54.017573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:19.258 [2024-07-20 15:52:54.017664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.258 [2024-07-20 15:52:54.017713] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:19.258 [2024-07-20 15:52:54.019504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.258 [2024-07-20 15:52:54.019648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.258 [2024-07-20 15:52:54.019729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:17:19.258 [2024-07-20 15:52:54.019764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.258 [2024-07-20 15:52:54.019849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.258 [2024-07-20 15:52:54.019930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:19.258 [2024-07-20 15:52:54.020001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:19.258 [2024-07-20 15:52:54.020030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.258 [2024-07-20 15:52:54.020085] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:19.258 [2024-07-20 15:52:54.020131] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:19.258 [2024-07-20 15:52:54.020217] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:19.258 [2024-07-20 15:52:54.020368] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:19.258 [2024-07-20 15:52:54.020500] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:19.258 [2024-07-20 15:52:54.020538] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:19.258 [2024-07-20 15:52:54.020552] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x168 bytes 00:17:19.258 [2024-07-20 15:52:54.020567] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:19.258 [2024-07-20 15:52:54.020580] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:19.258 [2024-07-20 15:52:54.020592] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:19.258 [2024-07-20 15:52:54.020613] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:19.258 [2024-07-20 15:52:54.020623] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:19.258 [2024-07-20 15:52:54.020637] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:19.258 [2024-07-20 15:52:54.020648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.258 [2024-07-20 15:52:54.020658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:19.258 [2024-07-20 15:52:54.020670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:17:19.258 [2024-07-20 15:52:54.020680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.258 [2024-07-20 15:52:54.020761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.258 [2024-07-20 15:52:54.020773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:19.258 [2024-07-20 15:52:54.020784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:19.258 [2024-07-20 15:52:54.020794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.258 [2024-07-20 15:52:54.020899] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:19.258 [2024-07-20 15:52:54.020913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:19.258 [2024-07-20 15:52:54.020925] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.258 [2024-07-20 15:52:54.020936] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.258 [2024-07-20 15:52:54.020946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:19.258 [2024-07-20 15:52:54.020955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:19.258 [2024-07-20 15:52:54.020965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:19.258 [2024-07-20 15:52:54.020975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:19.258 [2024-07-20 15:52:54.020985] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:19.258 [2024-07-20 15:52:54.020995] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.258 [2024-07-20 15:52:54.021004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:19.258 [2024-07-20 15:52:54.021015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:19.258 [2024-07-20 15:52:54.021028] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.258 [2024-07-20 15:52:54.021038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:19.258 [2024-07-20 15:52:54.021048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:19.258 [2024-07-20 15:52:54.021057] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.258 [2024-07-20 15:52:54.021067] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:17:19.258 [2024-07-20 15:52:54.021077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:19.258 [2024-07-20 15:52:54.021086] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.258 [2024-07-20 15:52:54.021096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:19.258 [2024-07-20 15:52:54.021106] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:19.258 [2024-07-20 15:52:54.021116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.258 [2024-07-20 15:52:54.021126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:19.258 [2024-07-20 15:52:54.021135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:19.258 [2024-07-20 15:52:54.021145] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.258 [2024-07-20 15:52:54.021154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:19.258 [2024-07-20 15:52:54.021163] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:19.258 [2024-07-20 15:52:54.021173] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.258 [2024-07-20 15:52:54.021187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:19.258 [2024-07-20 15:52:54.021197] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:19.258 [2024-07-20 15:52:54.021206] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.258 [2024-07-20 15:52:54.021215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:19.258 [2024-07-20 15:52:54.021225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:19.258 [2024-07-20 15:52:54.021235] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.258 [2024-07-20 15:52:54.021244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:19.258 [2024-07-20 15:52:54.021254] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:19.258 [2024-07-20 15:52:54.021263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.258 [2024-07-20 15:52:54.021273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:19.258 [2024-07-20 15:52:54.021283] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:19.258 [2024-07-20 15:52:54.021293] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.258 [2024-07-20 15:52:54.021302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:19.258 [2024-07-20 15:52:54.021311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:19.258 [2024-07-20 15:52:54.021321] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.258 [2024-07-20 15:52:54.021330] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:19.258 [2024-07-20 15:52:54.021344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:19.258 [2024-07-20 15:52:54.021370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.258 [2024-07-20 15:52:54.021381] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.259 [2024-07-20 15:52:54.021393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:19.259 [2024-07-20 15:52:54.021403] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:19.259 [2024-07-20 15:52:54.021412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:19.259 [2024-07-20 15:52:54.021422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:19.259 [2024-07-20 15:52:54.021431] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:19.259 [2024-07-20 15:52:54.021442] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:19.259 [2024-07-20 15:52:54.021452] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:19.259 [2024-07-20 15:52:54.021465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.259 [2024-07-20 15:52:54.021478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:19.259 [2024-07-20 15:52:54.021489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:19.259 [2024-07-20 15:52:54.021500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:19.259 [2024-07-20 15:52:54.021511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:19.259 [2024-07-20 15:52:54.021522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:19.259 [2024-07-20 15:52:54.021535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:19.259 [2024-07-20 15:52:54.021546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:19.259 [2024-07-20 15:52:54.021561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:19.259 [2024-07-20 15:52:54.021572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:19.259 [2024-07-20 15:52:54.021582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:19.259 [2024-07-20 15:52:54.021593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:19.259 [2024-07-20 15:52:54.021603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:19.259 [2024-07-20 15:52:54.021614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:19.259 [2024-07-20 15:52:54.021624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:19.259 [2024-07-20 15:52:54.021635] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:19.259 [2024-07-20 15:52:54.021649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.259 [2024-07-20 15:52:54.021670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:19.259 [2024-07-20 15:52:54.021682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:19.259 [2024-07-20 15:52:54.021692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:19.259 [2024-07-20 15:52:54.021703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:19.259 [2024-07-20 15:52:54.021715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.259 [2024-07-20 15:52:54.021728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:19.259 [2024-07-20 15:52:54.021739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:17:19.259 [2024-07-20 15:52:54.021749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.259 [2024-07-20 15:52:54.041087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.259 [2024-07-20 15:52:54.041138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.259 [2024-07-20 15:52:54.041155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.300 ms 00:17:19.259 [2024-07-20 15:52:54.041172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.259 [2024-07-20 15:52:54.041314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.259 [2024-07-20 15:52:54.041329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:19.259 [2024-07-20 15:52:54.041343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:19.259 [2024-07-20 15:52:54.041371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.518 [2024-07-20 15:52:54.052335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.518 [2024-07-20 15:52:54.052397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.518 [2024-07-20 15:52:54.052413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.951 ms 00:17:19.518 [2024-07-20 15:52:54.052427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.518 [2024-07-20 15:52:54.052489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.518 [2024-07-20 15:52:54.052501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.518 [2024-07-20 15:52:54.052513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:19.518 [2024-07-20 15:52:54.052523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.518 [2024-07-20 15:52:54.052944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.518 [2024-07-20 15:52:54.052960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.518 [2024-07-20 15:52:54.052971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:17:19.518 [2024-07-20 15:52:54.052982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.518 [2024-07-20 15:52:54.053100] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:19.518 [2024-07-20 15:52:54.053113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.518 [2024-07-20 15:52:54.053123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:19.518 [2024-07-20 15:52:54.053133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.518 [2024-07-20 15:52:54.059586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.518 [2024-07-20 15:52:54.059759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.518 [2024-07-20 15:52:54.059882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.440 ms 00:17:19.518 [2024-07-20 15:52:54.059920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.518 [2024-07-20 15:52:54.062573] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:19.519 [2024-07-20 15:52:54.062725] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:19.519 [2024-07-20 15:52:54.062816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.519 [2024-07-20 15:52:54.062853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:19.519 [2024-07-20 15:52:54.062884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.778 ms 00:17:19.519 [2024-07-20 15:52:54.062913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.519 [2024-07-20 15:52:54.075043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.519 [2024-07-20 15:52:54.075183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:19.519 [2024-07-20 15:52:54.075267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.068 ms 00:17:19.519 [2024-07-20 15:52:54.075308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.519 [2024-07-20 15:52:54.076987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.519 [2024-07-20 15:52:54.077108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:19.519 [2024-07-20 15:52:54.077176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:17:19.519 [2024-07-20 15:52:54.077210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.519 [2024-07-20 15:52:54.078681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.519 [2024-07-20 15:52:54.078795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:19.519 [2024-07-20 15:52:54.078813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:17:19.519 [2024-07-20 15:52:54.078823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.519 [2024-07-20 15:52:54.079110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.519 [2024-07-20 15:52:54.079130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:19.519 [2024-07-20 15:52:54.079150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:17:19.519 [2024-07-20 15:52:54.079160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.519 [2024-07-20 15:52:54.098781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.519 [2024-07-20 
15:52:54.098843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:19.519 [2024-07-20 15:52:54.098860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.627 ms 00:17:19.519 [2024-07-20 15:52:54.098870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.519 [2024-07-20 15:52:54.104959] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:19.519 [2024-07-20 15:52:54.120509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.519 [2024-07-20 15:52:54.120546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:19.519 [2024-07-20 15:52:54.120560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.592 ms 00:17:19.519 [2024-07-20 15:52:54.120569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.519 [2024-07-20 15:52:54.120653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.519 [2024-07-20 15:52:54.120666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:19.519 [2024-07-20 15:52:54.120680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:19.519 [2024-07-20 15:52:54.120689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.519 [2024-07-20 15:52:54.120741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.519 [2024-07-20 15:52:54.120752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:19.519 [2024-07-20 15:52:54.120761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:19.519 [2024-07-20 15:52:54.120770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.519 [2024-07-20 15:52:54.120792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.519 [2024-07-20 15:52:54.120802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:19.519 [2024-07-20 15:52:54.120824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:19.519 [2024-07-20 15:52:54.120837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.519 [2024-07-20 15:52:54.120871] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:19.519 [2024-07-20 15:52:54.120883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.519 [2024-07-20 15:52:54.120892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:19.519 [2024-07-20 15:52:54.120901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:19.519 [2024-07-20 15:52:54.120910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.519 [2024-07-20 15:52:54.124656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.519 [2024-07-20 15:52:54.124689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:19.519 [2024-07-20 15:52:54.124701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.732 ms 00:17:19.519 [2024-07-20 15:52:54.124739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.519 [2024-07-20 15:52:54.124824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.519 [2024-07-20 15:52:54.124837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:19.519 [2024-07-20 
15:52:54.124856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:19.519 [2024-07-20 15:52:54.124873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.519 [2024-07-20 15:52:54.125876] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:19.519 [2024-07-20 15:52:54.126831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 122.994 ms, result 0 00:17:19.519 [2024-07-20 15:52:54.127531] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:19.519 [2024-07-20 15:52:54.137224] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.336  Copying: 28/256 [MB] (28 MBps) Copying: 54/256 [MB] (25 MBps) Copying: 79/256 [MB] (25 MBps) Copying: 104/256 [MB] (24 MBps) Copying: 129/256 [MB] (24 MBps) Copying: 154/256 [MB] (25 MBps) Copying: 181/256 [MB] (26 MBps) Copying: 206/256 [MB] (25 MBps) Copying: 231/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-20 15:53:04.062462] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:29.336 [2024-07-20 15:53:04.063796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.336 [2024-07-20 15:53:04.063837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:29.336 [2024-07-20 15:53:04.063851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:29.336 [2024-07-20 15:53:04.063862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.336 [2024-07-20 15:53:04.063890] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:29.336 [2024-07-20 15:53:04.064550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.336 [2024-07-20 15:53:04.064572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:29.336 [2024-07-20 15:53:04.064582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:17:29.336 [2024-07-20 15:53:04.064592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.336 [2024-07-20 15:53:04.064812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.336 [2024-07-20 15:53:04.064828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:29.336 [2024-07-20 15:53:04.064843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:17:29.336 [2024-07-20 15:53:04.064852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.336 [2024-07-20 15:53:04.067710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.336 [2024-07-20 15:53:04.067733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:29.336 [2024-07-20 15:53:04.067743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.846 ms 00:17:29.336 [2024-07-20 15:53:04.067753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.336 [2024-07-20 15:53:04.073270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.336 [2024-07-20 15:53:04.073300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:29.336 [2024-07-20 15:53:04.073311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 5.509 ms 00:17:29.336 [2024-07-20 15:53:04.073325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.336 [2024-07-20 15:53:04.074805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.336 [2024-07-20 15:53:04.074840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:29.336 [2024-07-20 15:53:04.074852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:17:29.336 [2024-07-20 15:53:04.074861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.336 [2024-07-20 15:53:04.078761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.336 [2024-07-20 15:53:04.078797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:29.336 [2024-07-20 15:53:04.078809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.876 ms 00:17:29.336 [2024-07-20 15:53:04.078829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.336 [2024-07-20 15:53:04.078938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.336 [2024-07-20 15:53:04.078950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:29.336 [2024-07-20 15:53:04.078964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:29.336 [2024-07-20 15:53:04.078974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.336 [2024-07-20 15:53:04.081073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.336 [2024-07-20 15:53:04.081104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:29.336 [2024-07-20 15:53:04.081115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.085 ms 00:17:29.336 [2024-07-20 15:53:04.081125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.336 [2024-07-20 15:53:04.082769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.336 [2024-07-20 15:53:04.082800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:29.336 [2024-07-20 15:53:04.082811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.617 ms 00:17:29.336 [2024-07-20 15:53:04.082820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.336 [2024-07-20 15:53:04.084006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.336 [2024-07-20 15:53:04.084038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:29.336 [2024-07-20 15:53:04.084050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:17:29.336 [2024-07-20 15:53:04.084058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.336 [2024-07-20 15:53:04.085222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.336 [2024-07-20 15:53:04.085256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:29.336 [2024-07-20 15:53:04.085267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:17:29.336 [2024-07-20 15:53:04.085275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.336 [2024-07-20 15:53:04.085302] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:29.336 [2024-07-20 15:53:04.085318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 
0 state: free 00:17:29.337 [2024-07-20 15:53:04.085331 .. 15:53:04.086826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (99 identical records condensed) 00:17:29.338 [2024-07-20 
15:53:04.086978] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:29.338 [2024-07-20 15:53:04.087014] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f98535-0125-4c1a-b8ee-497ceb063ba4 00:17:29.338 [2024-07-20 15:53:04.087167] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:29.338 [2024-07-20 15:53:04.087240] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:29.338 [2024-07-20 15:53:04.087314] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:29.338 [2024-07-20 15:53:04.087343] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:29.338 [2024-07-20 15:53:04.087396] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:29.338 [2024-07-20 15:53:04.087430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:29.338 [2024-07-20 15:53:04.087459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:29.338 [2024-07-20 15:53:04.087486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:29.338 [2024-07-20 15:53:04.087514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:29.338 [2024-07-20 15:53:04.087603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.338 [2024-07-20 15:53:04.087646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:29.338 [2024-07-20 15:53:04.087676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.305 ms 00:17:29.338 [2024-07-20 15:53:04.087712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.089424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.338 [2024-07-20 15:53:04.089528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:29.338 [2024-07-20 15:53:04.089655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:17:29.338 [2024-07-20 15:53:04.089675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.089782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.338 [2024-07-20 15:53:04.089800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:29.338 [2024-07-20 15:53:04.089811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:29.338 [2024-07-20 15:53:04.089827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.096063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.338 [2024-07-20 15:53:04.096082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.338 [2024-07-20 15:53:04.096096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.338 [2024-07-20 15:53:04.096106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.096167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.338 [2024-07-20 15:53:04.096177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.338 [2024-07-20 15:53:04.096187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.338 [2024-07-20 15:53:04.096196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.096236] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.338 [2024-07-20 15:53:04.096247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.338 [2024-07-20 15:53:04.096256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.338 [2024-07-20 15:53:04.096265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.096286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.338 [2024-07-20 15:53:04.096296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.338 [2024-07-20 15:53:04.096305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.338 [2024-07-20 15:53:04.096314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.107410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.338 [2024-07-20 15:53:04.107451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.338 [2024-07-20 15:53:04.107463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.338 [2024-07-20 15:53:04.107494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.115686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.338 [2024-07-20 15:53:04.115721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.338 [2024-07-20 15:53:04.115733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.338 [2024-07-20 15:53:04.115743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.115778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.338 [2024-07-20 15:53:04.115788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.338 [2024-07-20 15:53:04.115798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.338 [2024-07-20 15:53:04.115807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.115838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.338 [2024-07-20 15:53:04.115848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.338 [2024-07-20 15:53:04.115858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.338 [2024-07-20 15:53:04.115867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.115935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.338 [2024-07-20 15:53:04.115947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.338 [2024-07-20 15:53:04.115957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.338 [2024-07-20 15:53:04.115966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.116004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.338 [2024-07-20 15:53:04.116019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:29.338 [2024-07-20 15:53:04.116029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.338 [2024-07-20 15:53:04.116038] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.116077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.338 [2024-07-20 15:53:04.116088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.338 [2024-07-20 15:53:04.116098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.338 [2024-07-20 15:53:04.116107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.116158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.338 [2024-07-20 15:53:04.116170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.338 [2024-07-20 15:53:04.116179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.338 [2024-07-20 15:53:04.116198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.338 [2024-07-20 15:53:04.116327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.607 ms, result 0 00:17:29.597 00:17:29.597 00:17:29.597 15:53:04 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:29.854 15:53:04 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:30.111 15:53:04 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:30.111 [2024-07-20 15:53:04.889535] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:30.111 [2024-07-20 15:53:04.889649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89180 ] 00:17:30.368 [2024-07-20 15:53:05.041526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.368 [2024-07-20 15:53:05.087483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.627 [2024-07-20 15:53:05.188630] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:30.627 [2024-07-20 15:53:05.188709] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:30.627 [2024-07-20 15:53:05.339634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.627 [2024-07-20 15:53:05.339679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:30.627 [2024-07-20 15:53:05.339693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:30.627 [2024-07-20 15:53:05.339703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.627 [2024-07-20 15:53:05.342109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.627 [2024-07-20 15:53:05.342146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:30.627 [2024-07-20 15:53:05.342159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.379 ms 00:17:30.627 [2024-07-20 15:53:05.342185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.627 [2024-07-20 15:53:05.342260] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:30.627 
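The three ftl.ftl_trim commands interleaved above (trim.sh lines 86, 87 and 90) are the test's verify-then-rewrite step: cmp confirms the first 4 MiB of the dumped data file reads back as zeroes after the trim, md5sum fingerprints the file for a later comparison, and spdk_dd then replays 1024 blocks of a random pattern onto the ftl0 bdev described by the saved JSON config (the "Copying: 4096/4096 [kB]" progress line further down suggests 4 KiB blocks, i.e. 4 MiB total). A stand-alone sketch of the same sequence, assuming only an SPDK checkout at $SPDK with the test's file layout:

  SPDK=/home/vagrant/spdk_repo/spdk
  # trimmed range must compare equal to zeroes
  cmp --bytes=4194304 "$SPDK/test/ftl/data" /dev/zero
  # checksum recorded for comparison after the rewrite
  md5sum "$SPDK/test/ftl/data"
  # replay 1024 blocks of random data onto ftl0 as defined in ftl.json
  "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/random_pattern" --ob=ftl0 \
      --count=1024 --json="$SPDK/test/ftl/config/ftl.json"

The FTL startup trace surrounding this point is spdk_dd bringing ftl0 back up from that config before the copy runs.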
[2024-07-20 15:53:05.342495] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:30.627 [2024-07-20 15:53:05.342521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.627 [2024-07-20 15:53:05.342532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:30.627 [2024-07-20 15:53:05.342547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:17:30.627 [2024-07-20 15:53:05.342563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.627 [2024-07-20 15:53:05.343986] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:30.627 [2024-07-20 15:53:05.346395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.627 [2024-07-20 15:53:05.346428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:30.627 [2024-07-20 15:53:05.346441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.413 ms 00:17:30.627 [2024-07-20 15:53:05.346451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.627 [2024-07-20 15:53:05.346517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.627 [2024-07-20 15:53:05.346530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:30.627 [2024-07-20 15:53:05.346541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:30.627 [2024-07-20 15:53:05.346553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.627 [2024-07-20 15:53:05.353183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.627 [2024-07-20 15:53:05.353209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:30.627 [2024-07-20 15:53:05.353220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.598 ms 00:17:30.627 [2024-07-20 15:53:05.353230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.627 [2024-07-20 15:53:05.353340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.627 [2024-07-20 15:53:05.353366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:30.627 [2024-07-20 15:53:05.353394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:30.627 [2024-07-20 15:53:05.353407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.627 [2024-07-20 15:53:05.353438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.627 [2024-07-20 15:53:05.353453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:30.627 [2024-07-20 15:53:05.353463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:30.627 [2024-07-20 15:53:05.353473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.627 [2024-07-20 15:53:05.353500] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:30.627 [2024-07-20 15:53:05.355099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.627 [2024-07-20 15:53:05.355134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:30.627 [2024-07-20 15:53:05.355150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.613 ms 00:17:30.627 [2024-07-20 15:53:05.355159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:30.627 [2024-07-20 15:53:05.355198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.627 [2024-07-20 15:53:05.355210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:30.627 [2024-07-20 15:53:05.355220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:30.627 [2024-07-20 15:53:05.355236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.627 [2024-07-20 15:53:05.355270] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:30.627 [2024-07-20 15:53:05.355293] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:30.627 [2024-07-20 15:53:05.355337] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:30.627 [2024-07-20 15:53:05.355384] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:30.627 [2024-07-20 15:53:05.355484] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:30.627 [2024-07-20 15:53:05.355505] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:30.627 [2024-07-20 15:53:05.355517] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:30.627 [2024-07-20 15:53:05.355541] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:30.627 [2024-07-20 15:53:05.355552] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:30.627 [2024-07-20 15:53:05.355570] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:30.627 [2024-07-20 15:53:05.355586] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:30.627 [2024-07-20 15:53:05.355595] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:30.627 [2024-07-20 15:53:05.355608] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:30.627 [2024-07-20 15:53:05.355619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.627 [2024-07-20 15:53:05.355629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:30.627 [2024-07-20 15:53:05.355639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:17:30.627 [2024-07-20 15:53:05.355648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.627 [2024-07-20 15:53:05.355725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.627 [2024-07-20 15:53:05.355736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:30.627 [2024-07-20 15:53:05.355746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:30.627 [2024-07-20 15:53:05.355755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.628 [2024-07-20 15:53:05.355839] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:30.628 [2024-07-20 15:53:05.355852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:30.628 [2024-07-20 15:53:05.355862] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.628 [2024-07-20 
15:53:05.355872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.628 [2024-07-20 15:53:05.355882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:30.628 [2024-07-20 15:53:05.355890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:30.628 [2024-07-20 15:53:05.355900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:30.628 [2024-07-20 15:53:05.355909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:30.628 [2024-07-20 15:53:05.355919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:30.628 [2024-07-20 15:53:05.355928] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.628 [2024-07-20 15:53:05.355937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:30.628 [2024-07-20 15:53:05.355946] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:30.628 [2024-07-20 15:53:05.355957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.628 [2024-07-20 15:53:05.355967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:30.628 [2024-07-20 15:53:05.355977] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:30.628 [2024-07-20 15:53:05.355986] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.628 [2024-07-20 15:53:05.355995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:30.628 [2024-07-20 15:53:05.356004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:30.628 [2024-07-20 15:53:05.356012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.628 [2024-07-20 15:53:05.356021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:30.628 [2024-07-20 15:53:05.356030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:30.628 [2024-07-20 15:53:05.356039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.628 [2024-07-20 15:53:05.356048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:30.628 [2024-07-20 15:53:05.356057] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:30.628 [2024-07-20 15:53:05.356066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.628 [2024-07-20 15:53:05.356075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:30.628 [2024-07-20 15:53:05.356083] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:30.628 [2024-07-20 15:53:05.356092] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.628 [2024-07-20 15:53:05.356106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:30.628 [2024-07-20 15:53:05.356115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:30.628 [2024-07-20 15:53:05.356123] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.628 [2024-07-20 15:53:05.356132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:30.628 [2024-07-20 15:53:05.356141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:30.628 [2024-07-20 15:53:05.356150] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.628 [2024-07-20 15:53:05.356158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 
00:17:30.628 [2024-07-20 15:53:05.356168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:30.628 [2024-07-20 15:53:05.356176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.628 [2024-07-20 15:53:05.356185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:30.628 [2024-07-20 15:53:05.356194] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:30.628 [2024-07-20 15:53:05.356202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.628 [2024-07-20 15:53:05.356211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:30.628 [2024-07-20 15:53:05.356220] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:30.628 [2024-07-20 15:53:05.356229] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.628 [2024-07-20 15:53:05.356238] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:30.628 [2024-07-20 15:53:05.356250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:30.628 [2024-07-20 15:53:05.356266] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.628 [2024-07-20 15:53:05.356276] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.628 [2024-07-20 15:53:05.356286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:30.628 [2024-07-20 15:53:05.356295] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:30.628 [2024-07-20 15:53:05.356304] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:30.628 [2024-07-20 15:53:05.356313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:30.628 [2024-07-20 15:53:05.356322] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:30.628 [2024-07-20 15:53:05.356331] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:30.628 [2024-07-20 15:53:05.356342] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:30.628 [2024-07-20 15:53:05.356353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.628 [2024-07-20 15:53:05.356364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:30.628 [2024-07-20 15:53:05.356375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:30.628 [2024-07-20 15:53:05.356385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:30.628 [2024-07-20 15:53:05.356658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:30.628 [2024-07-20 15:53:05.356709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:30.628 [2024-07-20 15:53:05.356760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:30.628 [2024-07-20 15:53:05.356806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 
blk_offs:0x7320 blk_sz:0x800 00:17:30.628 [2024-07-20 15:53:05.356852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:30.628 [2024-07-20 15:53:05.356897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:30.628 [2024-07-20 15:53:05.357105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:30.628 [2024-07-20 15:53:05.357220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:30.628 [2024-07-20 15:53:05.357325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:30.628 [2024-07-20 15:53:05.357383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:30.628 [2024-07-20 15:53:05.357430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:30.628 [2024-07-20 15:53:05.357476] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:30.628 [2024-07-20 15:53:05.357527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.628 [2024-07-20 15:53:05.357702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:30.628 [2024-07-20 15:53:05.357802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:30.628 [2024-07-20 15:53:05.357959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:30.628 [2024-07-20 15:53:05.357991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:30.628 [2024-07-20 15:53:05.358004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.628 [2024-07-20 15:53:05.358018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:30.628 [2024-07-20 15:53:05.358029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.217 ms 00:17:30.629 [2024-07-20 15:53:05.358040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.629 [2024-07-20 15:53:05.377902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.629 [2024-07-20 15:53:05.377934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.629 [2024-07-20 15:53:05.377946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.818 ms 00:17:30.629 [2024-07-20 15:53:05.377960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.629 [2024-07-20 15:53:05.378071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.629 [2024-07-20 15:53:05.378083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:30.629 [2024-07-20 15:53:05.378092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:30.629 
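The layout dump above is internally consistent and worth a quick sanity check: the superblock reports 23592960 L2P entries with an address size of 4 bytes, and 23592960 x 4 B = 94,371,840 B = 90.00 MiB, exactly the size printed for the l2p region. Assuming the FTL's usual 4 KiB logical block, the same entry count maps 23592960 x 4 KiB = 90 GiB of user-addressable space out of the 102400.00 MiB (100 GiB) data_btm region, the difference presumably held back for metadata and over-provisioning. The arithmetic, replayed in shell:

  echo $((23592960 * 4))             # 94371840 B for the L2P table
  echo $((94371840 / 1048576))       # = 90 MiB, matching "Region l2p ... blocks: 90.00 MiB"
  echo $((23592960 * 4096 / 2**30))  # = 90 GiB addressable, assuming 4 KiB blocks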
[2024-07-20 15:53:05.378115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.629 [2024-07-20 15:53:05.388810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.629 [2024-07-20 15:53:05.388848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.629 [2024-07-20 15:53:05.388863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.689 ms 00:17:30.629 [2024-07-20 15:53:05.388889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.629 [2024-07-20 15:53:05.388961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.629 [2024-07-20 15:53:05.388975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.629 [2024-07-20 15:53:05.388988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:30.629 [2024-07-20 15:53:05.389008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.629 [2024-07-20 15:53:05.389504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.629 [2024-07-20 15:53:05.389525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.629 [2024-07-20 15:53:05.389546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:17:30.629 [2024-07-20 15:53:05.389557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.629 [2024-07-20 15:53:05.389697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.629 [2024-07-20 15:53:05.389712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.629 [2024-07-20 15:53:05.389733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:17:30.629 [2024-07-20 15:53:05.389744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.629 [2024-07-20 15:53:05.396052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.629 [2024-07-20 15:53:05.396082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.629 [2024-07-20 15:53:05.396095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.292 ms 00:17:30.629 [2024-07-20 15:53:05.396104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.629 [2024-07-20 15:53:05.398709] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:30.629 [2024-07-20 15:53:05.398742] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:30.629 [2024-07-20 15:53:05.398758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.629 [2024-07-20 15:53:05.398771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:30.629 [2024-07-20 15:53:05.398782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.565 ms 00:17:30.629 [2024-07-20 15:53:05.398793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.629 [2024-07-20 15:53:05.410963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.629 [2024-07-20 15:53:05.411012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:30.629 [2024-07-20 15:53:05.411025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.141 ms 00:17:30.629 [2024-07-20 15:53:05.411057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:30.629 [2024-07-20 15:53:05.412866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.629 [2024-07-20 15:53:05.412897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:30.629 [2024-07-20 15:53:05.412909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:17:30.629 [2024-07-20 15:53:05.412918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.629 [2024-07-20 15:53:05.414521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.629 [2024-07-20 15:53:05.414552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:30.629 [2024-07-20 15:53:05.414563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.561 ms 00:17:30.629 [2024-07-20 15:53:05.414573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.629 [2024-07-20 15:53:05.414858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.629 [2024-07-20 15:53:05.414879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:30.629 [2024-07-20 15:53:05.414891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:17:30.629 [2024-07-20 15:53:05.414901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.434942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.435013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:30.889 [2024-07-20 15:53:05.435031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.047 ms 00:17:30.889 [2024-07-20 15:53:05.435042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.441092] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:30.889 [2024-07-20 15:53:05.456743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.456784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:30.889 [2024-07-20 15:53:05.456800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.662 ms 00:17:30.889 [2024-07-20 15:53:05.456810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.456898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.456910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:30.889 [2024-07-20 15:53:05.456925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:30.889 [2024-07-20 15:53:05.456934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.456987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.456997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:30.889 [2024-07-20 15:53:05.457008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:30.889 [2024-07-20 15:53:05.457018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.457054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.457064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:17:30.889 [2024-07-20 15:53:05.457083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:30.889 [2024-07-20 15:53:05.457095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.457128] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:30.889 [2024-07-20 15:53:05.457139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.457148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:30.889 [2024-07-20 15:53:05.457157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:30.889 [2024-07-20 15:53:05.457167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.460849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.460883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:30.889 [2024-07-20 15:53:05.460896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.668 ms 00:17:30.889 [2024-07-20 15:53:05.460927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.461013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.461036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:30.889 [2024-07-20 15:53:05.461048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:30.889 [2024-07-20 15:53:05.461058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.462058] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:30.889 [2024-07-20 15:53:05.463026] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 122.342 ms, result 0 00:17:30.889 [2024-07-20 15:53:05.463660] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:30.889 [2024-07-20 15:53:05.473421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:30.889  Copying: 4096/4096 [kB] (average 22 MBps)[2024-07-20 15:53:05.648762] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:30.889 [2024-07-20 15:53:05.649444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.649469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:30.889 [2024-07-20 15:53:05.649481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:30.889 [2024-07-20 15:53:05.649491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.649511] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:30.889 [2024-07-20 15:53:05.650151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.650167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:30.889 [2024-07-20 15:53:05.650177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:17:30.889 [2024-07-20 15:53:05.650187] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.651929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.651963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:30.889 [2024-07-20 15:53:05.651981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.726 ms 00:17:30.889 [2024-07-20 15:53:05.651991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.655139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.655168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:30.889 [2024-07-20 15:53:05.655179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.135 ms 00:17:30.889 [2024-07-20 15:53:05.655188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.660879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.660909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:30.889 [2024-07-20 15:53:05.660926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.671 ms 00:17:30.889 [2024-07-20 15:53:05.660936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.662325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.662371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:30.889 [2024-07-20 15:53:05.662383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.343 ms 00:17:30.889 [2024-07-20 15:53:05.662392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.666003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.666047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:30.889 [2024-07-20 15:53:05.666059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.586 ms 00:17:30.889 [2024-07-20 15:53:05.666084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.666189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.666202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:30.889 [2024-07-20 15:53:05.666224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:30.889 [2024-07-20 15:53:05.666233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.668237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.668272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:30.889 [2024-07-20 15:53:05.668283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.989 ms 00:17:30.889 [2024-07-20 15:53:05.668292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.889 [2024-07-20 15:53:05.669829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.889 [2024-07-20 15:53:05.669860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:30.889 [2024-07-20 15:53:05.669871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.509 ms 00:17:30.890 
[2024-07-20 15:53:05.669880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.890 [2024-07-20 15:53:05.671033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.890 [2024-07-20 15:53:05.671066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:30.890 [2024-07-20 15:53:05.671078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:17:30.890 [2024-07-20 15:53:05.671087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.890 [2024-07-20 15:53:05.672341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.890 [2024-07-20 15:53:05.672388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:30.890 [2024-07-20 15:53:05.672400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:17:30.890 [2024-07-20 15:53:05.672409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.890 [2024-07-20 15:53:05.672437] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:30.890 [2024-07-20 15:53:05.672462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 
[2024-07-20 15:53:05.672642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:17:30.890 [2024-07-20 15:53:05.672911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.672994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.673005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.673015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.673025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.673035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.673046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.673056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.673066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.673077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.673087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.673097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.673107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.673117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.673127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:30.890 [2024-07-20 15:53:05.673138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:30.891 [2024-07-20 15:53:05.673539] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:30.891 [2024-07-20 15:53:05.673548] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f98535-0125-4c1a-b8ee-497ceb063ba4 00:17:30.891 [2024-07-20 15:53:05.673559] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:30.891 [2024-07-20 15:53:05.673569] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:30.891 [2024-07-20 15:53:05.673578] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:30.891 [2024-07-20 15:53:05.673588] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:30.891 [2024-07-20 15:53:05.673604] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:30.891 [2024-07-20 15:53:05.673617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:30.891 [2024-07-20 15:53:05.673626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:30.891 [2024-07-20 15:53:05.673635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:30.891 [2024-07-20 15:53:05.673643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:30.891 [2024-07-20 15:53:05.673653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.891 [2024-07-20 15:53:05.673670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:30.891 [2024-07-20 15:53:05.673680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:17:30.891 [2024-07-20 15:53:05.673693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.891 [2024-07-20 15:53:05.675484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.891 [2024-07-20 15:53:05.675509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:30.891 [2024-07-20 15:53:05.675521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 00:17:30.891 [2024-07-20 15:53:05.675535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.891 [2024-07-20 15:53:05.675641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.891 [2024-07-20 15:53:05.675652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:30.891 [2024-07-20 15:53:05.675662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:30.891 [2024-07-20 15:53:05.675672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.891 [2024-07-20 15:53:05.681984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.891 [2024-07-20 15:53:05.682089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.891 [2024-07-20 15:53:05.682163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.891 [2024-07-20 15:53:05.682199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.891 [2024-07-20 15:53:05.682292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.891 [2024-07-20 15:53:05.682329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.891 [2024-07-20 15:53:05.682388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.891 [2024-07-20 15:53:05.682418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.891 [2024-07-20 15:53:05.682486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.150 [2024-07-20 15:53:05.682591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.150 [2024-07-20 15:53:05.682660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.150 [2024-07-20 15:53:05.682694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.150 [2024-07-20 15:53:05.682733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.150 [2024-07-20 15:53:05.682764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.150 [2024-07-20 15:53:05.682792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.150 [2024-07-20 15:53:05.682821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.150 [2024-07-20 15:53:05.694036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.150 [2024-07-20 15:53:05.694216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.150 [2024-07-20 15:53:05.694422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.150 [2024-07-20 15:53:05.694467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.150 [2024-07-20 15:53:05.702627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.150 [2024-07-20 15:53:05.702762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.150 [2024-07-20 15:53:05.702848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.150 [2024-07-20 15:53:05.702882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.150 [2024-07-20 15:53:05.702929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.150 [2024-07-20 15:53:05.702974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:31.150 [2024-07-20 15:53:05.703003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.150 [2024-07-20 15:53:05.703031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.150 [2024-07-20 15:53:05.703086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:31.150 [2024-07-20 15:53:05.703116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:31.150 [2024-07-20 15:53:05.703200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.150 [2024-07-20 15:53:05.703236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.150 [2024-07-20 15:53:05.703343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.150 [2024-07-20 15:53:05.703509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:31.150 [2024-07-20 15:53:05.703613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.150 [2024-07-20 15:53:05.703646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.150 [2024-07-20 15:53:05.703717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.150 [2024-07-20 15:53:05.703757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:31.150 [2024-07-20 15:53:05.703847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.150 [2024-07-20 15:53:05.703881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.150 [2024-07-20 15:53:05.703943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.150 [2024-07-20 15:53:05.703975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:31.150 [2024-07-20 15:53:05.704004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.150 [2024-07-20 15:53:05.704078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.150 [2024-07-20 15:53:05.704159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.150 [2024-07-20 15:53:05.704201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:31.150 [2024-07-20 15:53:05.704232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.150 [2024-07-20 15:53:05.704337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.150 [2024-07-20 15:53:05.704522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.124 ms, result 0 00:17:31.150 00:17:31.150 00:17:31.408 15:53:05 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=89194 00:17:31.408 15:53:05 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:31.408 15:53:05 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 89194 00:17:31.408 15:53:05 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 89194 ']' 00:17:31.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.408 15:53:05 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.408 15:53:05 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:31.408 15:53:05 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.408 15:53:05 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:31.408 15:53:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:31.408 [2024-07-20 15:53:06.058185] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:17:31.408 [2024-07-20 15:53:06.058335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89194 ] 00:17:31.666 [2024-07-20 15:53:06.208906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.666 [2024-07-20 15:53:06.250779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.248 15:53:06 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:32.248 15:53:06 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:17:32.248 15:53:06 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:32.248 [2024-07-20 15:53:06.992202] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:32.248 [2024-07-20 15:53:06.992260] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:32.507 [2024-07-20 15:53:07.159012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.507 [2024-07-20 15:53:07.159062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:32.507 [2024-07-20 15:53:07.159081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:32.507 [2024-07-20 15:53:07.159091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.507 [2024-07-20 15:53:07.161443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.507 [2024-07-20 15:53:07.161481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:32.507 [2024-07-20 15:53:07.161499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.334 ms 00:17:32.507 [2024-07-20 15:53:07.161510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.507 [2024-07-20 15:53:07.161596] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:32.507 [2024-07-20 15:53:07.161820] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:32.507 [2024-07-20 15:53:07.161841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.507 [2024-07-20 15:53:07.161851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:32.507 [2024-07-20 15:53:07.161865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:17:32.507 [2024-07-20 15:53:07.161875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.507 [2024-07-20 15:53:07.163541] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:32.507 [2024-07-20 15:53:07.166212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.507 [2024-07-20 15:53:07.166403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:32.507 [2024-07-20 15:53:07.166559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.681 ms 00:17:32.507 [2024-07-20 15:53:07.166607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.507 [2024-07-20 15:53:07.166697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.507 [2024-07-20 15:53:07.166858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:32.507 [2024-07-20 15:53:07.166915] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:32.507 [2024-07-20 15:53:07.166952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.507 [2024-07-20 15:53:07.173647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.507 [2024-07-20 15:53:07.173807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:32.507 [2024-07-20 15:53:07.173940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.632 ms 00:17:32.507 [2024-07-20 15:53:07.173982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.507 [2024-07-20 15:53:07.174110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.507 [2024-07-20 15:53:07.174129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:32.507 [2024-07-20 15:53:07.174149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:32.507 [2024-07-20 15:53:07.174164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.507 [2024-07-20 15:53:07.174197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.508 [2024-07-20 15:53:07.174219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:32.508 [2024-07-20 15:53:07.174230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:32.508 [2024-07-20 15:53:07.174250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.508 [2024-07-20 15:53:07.174289] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:32.508 [2024-07-20 15:53:07.175946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.508 [2024-07-20 15:53:07.175976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:32.508 [2024-07-20 15:53:07.175993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms 00:17:32.508 [2024-07-20 15:53:07.176007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.508 [2024-07-20 15:53:07.176055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.508 [2024-07-20 15:53:07.176068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:32.508 [2024-07-20 15:53:07.176081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:32.508 [2024-07-20 15:53:07.176092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.508 [2024-07-20 15:53:07.176117] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:32.508 [2024-07-20 15:53:07.176140] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:32.508 [2024-07-20 15:53:07.176187] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:32.508 [2024-07-20 15:53:07.176215] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:32.508 [2024-07-20 15:53:07.176306] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:32.508 [2024-07-20 15:53:07.176321] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:32.508 [2024-07-20 15:53:07.176340] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:32.508 [2024-07-20 15:53:07.176370] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:32.508 [2024-07-20 15:53:07.176386] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:32.508 [2024-07-20 15:53:07.176398] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:32.508 [2024-07-20 15:53:07.176414] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:32.508 [2024-07-20 15:53:07.176424] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:32.508 [2024-07-20 15:53:07.176437] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:32.508 [2024-07-20 15:53:07.176451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.508 [2024-07-20 15:53:07.176464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:32.508 [2024-07-20 15:53:07.176475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:17:32.508 [2024-07-20 15:53:07.176488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.508 [2024-07-20 15:53:07.176563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.508 [2024-07-20 15:53:07.176577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:32.508 [2024-07-20 15:53:07.176588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:32.508 [2024-07-20 15:53:07.176601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.508 [2024-07-20 15:53:07.176689] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:32.508 [2024-07-20 15:53:07.176708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:32.508 [2024-07-20 15:53:07.176719] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:32.508 [2024-07-20 15:53:07.176732] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.508 [2024-07-20 15:53:07.176744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:32.508 [2024-07-20 15:53:07.176760] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:32.508 [2024-07-20 15:53:07.176770] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:32.508 [2024-07-20 15:53:07.176783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:32.508 [2024-07-20 15:53:07.176793] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:32.508 [2024-07-20 15:53:07.176805] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:32.508 [2024-07-20 15:53:07.176815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:32.508 [2024-07-20 15:53:07.176828] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:32.508 [2024-07-20 15:53:07.176837] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:32.508 [2024-07-20 15:53:07.176849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:32.508 [2024-07-20 15:53:07.176860] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:32.508 [2024-07-20 15:53:07.176872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.508 
[2024-07-20 15:53:07.176882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:32.508 [2024-07-20 15:53:07.176895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:32.508 [2024-07-20 15:53:07.176904] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.508 [2024-07-20 15:53:07.176916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:32.508 [2024-07-20 15:53:07.176926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:32.508 [2024-07-20 15:53:07.176941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.508 [2024-07-20 15:53:07.176951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:32.508 [2024-07-20 15:53:07.176963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:32.508 [2024-07-20 15:53:07.176973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.508 [2024-07-20 15:53:07.176984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:32.508 [2024-07-20 15:53:07.176994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:32.508 [2024-07-20 15:53:07.177007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.508 [2024-07-20 15:53:07.177017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:32.508 [2024-07-20 15:53:07.177030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:32.508 [2024-07-20 15:53:07.177039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:32.508 [2024-07-20 15:53:07.177053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:32.508 [2024-07-20 15:53:07.177063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:32.508 [2024-07-20 15:53:07.177075] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:32.508 [2024-07-20 15:53:07.177084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:32.508 [2024-07-20 15:53:07.177097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:32.508 [2024-07-20 15:53:07.177106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:32.508 [2024-07-20 15:53:07.177121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:32.508 [2024-07-20 15:53:07.177131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:32.508 [2024-07-20 15:53:07.177142] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.508 [2024-07-20 15:53:07.177152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:32.508 [2024-07-20 15:53:07.177164] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:32.508 [2024-07-20 15:53:07.177174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.508 [2024-07-20 15:53:07.177185] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:32.508 [2024-07-20 15:53:07.177196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:32.508 [2024-07-20 15:53:07.177209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:32.508 [2024-07-20 15:53:07.177219] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:32.508 [2024-07-20 15:53:07.177233] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:32.508 [2024-07-20 15:53:07.177243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:32.508 [2024-07-20 15:53:07.177255] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:32.508 [2024-07-20 15:53:07.177266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:32.508 [2024-07-20 15:53:07.177278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:32.508 [2024-07-20 15:53:07.177287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:32.508 [2024-07-20 15:53:07.177303] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:32.508 [2024-07-20 15:53:07.177316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:32.508 [2024-07-20 15:53:07.177340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:32.508 [2024-07-20 15:53:07.177352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:32.508 [2024-07-20 15:53:07.177377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:32.508 [2024-07-20 15:53:07.177388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:32.508 [2024-07-20 15:53:07.177402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:32.508 [2024-07-20 15:53:07.177412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:32.508 [2024-07-20 15:53:07.177426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:32.508 [2024-07-20 15:53:07.177437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:32.508 [2024-07-20 15:53:07.177450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:32.508 [2024-07-20 15:53:07.177462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:32.508 [2024-07-20 15:53:07.177475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:32.508 [2024-07-20 15:53:07.177486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:32.508 [2024-07-20 15:53:07.177499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:32.508 [2024-07-20 15:53:07.177510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:32.508 [2024-07-20 15:53:07.177526] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:32.508 [2024-07-20 
15:53:07.177537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:32.509 [2024-07-20 15:53:07.177561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:32.509 [2024-07-20 15:53:07.177572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:32.509 [2024-07-20 15:53:07.177585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:32.509 [2024-07-20 15:53:07.177596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:32.509 [2024-07-20 15:53:07.177610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.177621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:32.509 [2024-07-20 15:53:07.177634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:17:32.509 [2024-07-20 15:53:07.177658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.189593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.189627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.509 [2024-07-20 15:53:07.189657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.888 ms 00:17:32.509 [2024-07-20 15:53:07.189668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.189782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.189798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:32.509 [2024-07-20 15:53:07.189814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:32.509 [2024-07-20 15:53:07.189830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.200788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.200829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.509 [2024-07-20 15:53:07.200846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.950 ms 00:17:32.509 [2024-07-20 15:53:07.200857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.200927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.200938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.509 [2024-07-20 15:53:07.200952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:32.509 [2024-07-20 15:53:07.200963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.201417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.201432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.509 [2024-07-20 15:53:07.201445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:17:32.509 [2024-07-20 15:53:07.201456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.201575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.201591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.509 [2024-07-20 15:53:07.201607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:32.509 [2024-07-20 15:53:07.201617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.208819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.208851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.509 [2024-07-20 15:53:07.208866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.186 ms 00:17:32.509 [2024-07-20 15:53:07.208878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.211570] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:32.509 [2024-07-20 15:53:07.211604] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:32.509 [2024-07-20 15:53:07.211621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.211632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:32.509 [2024-07-20 15:53:07.211646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.645 ms 00:17:32.509 [2024-07-20 15:53:07.211656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.224052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.224087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:32.509 [2024-07-20 15:53:07.224111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.335 ms 00:17:32.509 [2024-07-20 15:53:07.224121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.225984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.226016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:32.509 [2024-07-20 15:53:07.226031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.801 ms 00:17:32.509 [2024-07-20 15:53:07.226041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.227610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.227640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:32.509 [2024-07-20 15:53:07.227655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:17:32.509 [2024-07-20 15:53:07.227665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.227954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.227979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:32.509 [2024-07-20 15:53:07.227994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:17:32.509 [2024-07-20 15:53:07.228004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.269127] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.269207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:32.509 [2024-07-20 15:53:07.269248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.155 ms 00:17:32.509 [2024-07-20 15:53:07.269263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.278097] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:32.509 [2024-07-20 15:53:07.293642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.293685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:32.509 [2024-07-20 15:53:07.293700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.285 ms 00:17:32.509 [2024-07-20 15:53:07.293729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.293811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.293827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:32.509 [2024-07-20 15:53:07.293840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:32.509 [2024-07-20 15:53:07.293856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.293913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.293927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:32.509 [2024-07-20 15:53:07.293938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:32.509 [2024-07-20 15:53:07.293968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.293994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.294009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:32.509 [2024-07-20 15:53:07.294020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:32.509 [2024-07-20 15:53:07.294036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.294072] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:32.509 [2024-07-20 15:53:07.294093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.294103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:32.509 [2024-07-20 15:53:07.294116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:32.509 [2024-07-20 15:53:07.294127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.297784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.297819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:32.509 [2024-07-20 15:53:07.297835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.633 ms 00:17:32.509 [2024-07-20 15:53:07.297846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.297939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.509 [2024-07-20 15:53:07.297952] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:32.509 [2024-07-20 15:53:07.297966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:32.509 [2024-07-20 15:53:07.297976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.509 [2024-07-20 15:53:07.298954] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:32.509 [2024-07-20 15:53:07.299894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 139.880 ms, result 0 00:17:32.768 [2024-07-20 15:53:07.300852] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:32.768 Some configs were skipped because the RPC state that can call them passed over. 00:17:32.768 15:53:07 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:32.768 [2024-07-20 15:53:07.500133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.768 [2024-07-20 15:53:07.500189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:32.768 [2024-07-20 15:53:07.500205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:17:32.768 [2024-07-20 15:53:07.500219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.768 [2024-07-20 15:53:07.500255] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.733 ms, result 0 00:17:32.768 true 00:17:32.768 15:53:07 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:33.026 [2024-07-20 15:53:07.688955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.026 [2024-07-20 15:53:07.689004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:33.026 [2024-07-20 15:53:07.689023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.404 ms 00:17:33.026 [2024-07-20 15:53:07.689033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.026 [2024-07-20 15:53:07.689072] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.545 ms, result 0 00:17:33.026 true 00:17:33.026 15:53:07 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 89194 00:17:33.026 15:53:07 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89194 ']' 00:17:33.026 15:53:07 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89194 00:17:33.026 15:53:07 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:17:33.026 15:53:07 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:33.026 15:53:07 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89194 00:17:33.026 killing process with pid 89194 00:17:33.026 15:53:07 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:33.026 15:53:07 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:33.026 15:53:07 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89194' 00:17:33.026 15:53:07 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 89194 00:17:33.026 15:53:07 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 89194 00:17:33.286 [2024-07-20 15:53:07.879841] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.286 [2024-07-20 15:53:07.879905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:33.286 [2024-07-20 15:53:07.879943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:33.286 [2024-07-20 15:53:07.879963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.286 [2024-07-20 15:53:07.879989] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:33.286 [2024-07-20 15:53:07.880658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.286 [2024-07-20 15:53:07.880673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:33.286 [2024-07-20 15:53:07.880686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:17:33.286 [2024-07-20 15:53:07.880698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.286 [2024-07-20 15:53:07.880932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.286 [2024-07-20 15:53:07.880944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:33.286 [2024-07-20 15:53:07.880956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:17:33.286 [2024-07-20 15:53:07.880965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.286 [2024-07-20 15:53:07.884526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.286 [2024-07-20 15:53:07.884561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:33.286 [2024-07-20 15:53:07.884585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.543 ms 00:17:33.286 [2024-07-20 15:53:07.884595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.286 [2024-07-20 15:53:07.890100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.286 [2024-07-20 15:53:07.890138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:33.286 [2024-07-20 15:53:07.890152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.466 ms 00:17:33.286 [2024-07-20 15:53:07.890161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.286 [2024-07-20 15:53:07.891681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.286 [2024-07-20 15:53:07.891717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:33.286 [2024-07-20 15:53:07.891732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:17:33.286 [2024-07-20 15:53:07.891741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.286 [2024-07-20 15:53:07.895687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.286 [2024-07-20 15:53:07.895722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:33.286 [2024-07-20 15:53:07.895736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.911 ms 00:17:33.286 [2024-07-20 15:53:07.895746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.286 [2024-07-20 15:53:07.895852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.286 [2024-07-20 15:53:07.895863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:33.286 [2024-07-20 15:53:07.895876] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:33.286 [2024-07-20 15:53:07.895886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.286 [2024-07-20 15:53:07.897939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.286 [2024-07-20 15:53:07.897972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:33.286 [2024-07-20 15:53:07.897986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.032 ms 00:17:33.286 [2024-07-20 15:53:07.897996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.286 [2024-07-20 15:53:07.899594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.286 [2024-07-20 15:53:07.899628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:33.286 [2024-07-20 15:53:07.899643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:17:33.286 [2024-07-20 15:53:07.899652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.286 [2024-07-20 15:53:07.900793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.286 [2024-07-20 15:53:07.900827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:33.286 [2024-07-20 15:53:07.900841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:17:33.286 [2024-07-20 15:53:07.900851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.286 [2024-07-20 15:53:07.901956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.286 [2024-07-20 15:53:07.901990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:33.286 [2024-07-20 15:53:07.902004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:17:33.286 [2024-07-20 15:53:07.902014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.286 [2024-07-20 15:53:07.902049] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:33.286 [2024-07-20 15:53:07.902066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:33.286 [2024-07-20 15:53:07.902081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:33.286 [2024-07-20 15:53:07.902092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:33.286 [2024-07-20 15:53:07.902108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:33.286 [2024-07-20 15:53:07.902119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:33.286 [2024-07-20 15:53:07.902133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:33.286 [2024-07-20 15:53:07.902144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:33.286 [2024-07-20 15:53:07.902157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:33.286 [2024-07-20 15:53:07.902167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:33.286 [2024-07-20 15:53:07.902180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:33.286 [2024-07-20 15:53:07.902191] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:33.286 [2024-07-20 15:53:07.902205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:33.286 [2024-07-20 15:53:07.902216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:33.286 [2024-07-20 15:53:07.902231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:33.286 [2024-07-20 15:53:07.902242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 
15:53:07.902540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:17:33.287 [2024-07-20 15:53:07.902845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.902994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:33.287 [2024-07-20 15:53:07.903343] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:33.287 [2024-07-20 15:53:07.903607] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f98535-0125-4c1a-b8ee-497ceb063ba4 00:17:33.287 [2024-07-20 15:53:07.903677] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:33.287 [2024-07-20 15:53:07.903712] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:33.287 [2024-07-20 15:53:07.903746] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:33.287 [2024-07-20 15:53:07.903779] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:33.287 [2024-07-20 15:53:07.903810] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:33.287 [2024-07-20 15:53:07.903893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:33.287 [2024-07-20 15:53:07.903928] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:33.287 [2024-07-20 15:53:07.903961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:33.287 [2024-07-20 15:53:07.903990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:33.287 [2024-07-20 15:53:07.904025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.287 
[2024-07-20 15:53:07.904056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:33.287 [2024-07-20 15:53:07.904197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.981 ms 00:17:33.288 [2024-07-20 15:53:07.904230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.906011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.288 [2024-07-20 15:53:07.906130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:33.288 [2024-07-20 15:53:07.906228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.709 ms 00:17:33.288 [2024-07-20 15:53:07.906264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.906425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.288 [2024-07-20 15:53:07.906572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:33.288 [2024-07-20 15:53:07.906634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:33.288 [2024-07-20 15:53:07.906664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.913788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.288 [2024-07-20 15:53:07.913905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:33.288 [2024-07-20 15:53:07.914076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.288 [2024-07-20 15:53:07.914113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.914226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.288 [2024-07-20 15:53:07.914263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:33.288 [2024-07-20 15:53:07.914380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.288 [2024-07-20 15:53:07.914420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.914513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.288 [2024-07-20 15:53:07.914553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:33.288 [2024-07-20 15:53:07.914682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.288 [2024-07-20 15:53:07.914694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.914719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.288 [2024-07-20 15:53:07.914729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:33.288 [2024-07-20 15:53:07.914742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.288 [2024-07-20 15:53:07.914756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.926574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.288 [2024-07-20 15:53:07.926616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:33.288 [2024-07-20 15:53:07.926632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.288 [2024-07-20 15:53:07.926658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.934967] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.288 [2024-07-20 15:53:07.935001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:33.288 [2024-07-20 15:53:07.935017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.288 [2024-07-20 15:53:07.935043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.935097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.288 [2024-07-20 15:53:07.935109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:33.288 [2024-07-20 15:53:07.935125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.288 [2024-07-20 15:53:07.935136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.935171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.288 [2024-07-20 15:53:07.935183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:33.288 [2024-07-20 15:53:07.935195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.288 [2024-07-20 15:53:07.935215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.935295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.288 [2024-07-20 15:53:07.935308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:33.288 [2024-07-20 15:53:07.935328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.288 [2024-07-20 15:53:07.935351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.935413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.288 [2024-07-20 15:53:07.935443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:33.288 [2024-07-20 15:53:07.935456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.288 [2024-07-20 15:53:07.935466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.935520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.288 [2024-07-20 15:53:07.935532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:33.288 [2024-07-20 15:53:07.935546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.288 [2024-07-20 15:53:07.935558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.935613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.288 [2024-07-20 15:53:07.935626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:33.288 [2024-07-20 15:53:07.935639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.288 [2024-07-20 15:53:07.935649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.288 [2024-07-20 15:53:07.935802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.011 ms, result 0 00:17:33.547 15:53:08 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:33.547 
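For reference, the trim.sh@105 step above reads the test pattern back out of the FTL bdev with spdk_dd, using the JSON config the test saved earlier. Below is a minimal reproduction of that invocation, with every flag and path copied from this run; --count is in FTL blocks, and at the 4 KiB block size implied by the "Copying: 256/256 [MB]" progress further down, 65536 blocks works out to 256 MiB (the block size is inferred, not stated in the log):

    # Read 65536 FTL blocks (256 MiB, assuming the 4 KiB block size implied
    # by the copy progress below) from bdev ftl0 into a plain file, loading
    # the bdev configuration the test saved in ftl.json.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
        --count=65536 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The resulting data file is what the later md5sum -c step compares against the checksum of the pattern originally written.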
[2024-07-20 15:53:08.259568] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:33.547 [2024-07-20 15:53:08.259694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89230 ] 00:17:33.805 [2024-07-20 15:53:08.409815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.805 [2024-07-20 15:53:08.451919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.805 [2024-07-20 15:53:08.552815] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:33.805 [2024-07-20 15:53:08.552880] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:34.065 [2024-07-20 15:53:08.703809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.065 [2024-07-20 15:53:08.703856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:34.065 [2024-07-20 15:53:08.703872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:34.065 [2024-07-20 15:53:08.703898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.065 [2024-07-20 15:53:08.706293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.065 [2024-07-20 15:53:08.706330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:34.065 [2024-07-20 15:53:08.706343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.378 ms 00:17:34.065 [2024-07-20 15:53:08.706362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.065 [2024-07-20 15:53:08.706448] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:34.065 [2024-07-20 15:53:08.706658] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:34.065 [2024-07-20 15:53:08.706674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.065 [2024-07-20 15:53:08.706684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:34.066 [2024-07-20 15:53:08.706698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:17:34.066 [2024-07-20 15:53:08.706708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.066 [2024-07-20 15:53:08.708194] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:34.066 [2024-07-20 15:53:08.710621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.066 [2024-07-20 15:53:08.710654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:34.066 [2024-07-20 15:53:08.710667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.432 ms 00:17:34.066 [2024-07-20 15:53:08.710677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.066 [2024-07-20 15:53:08.710743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.066 [2024-07-20 15:53:08.710756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:34.066 [2024-07-20 15:53:08.710768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:34.066 [2024-07-20 15:53:08.710781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.066 [2024-07-20 
15:53:08.717614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.066 [2024-07-20 15:53:08.717744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:34.066 [2024-07-20 15:53:08.717901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.800 ms 00:17:34.066 [2024-07-20 15:53:08.717939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.066 [2024-07-20 15:53:08.718093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.066 [2024-07-20 15:53:08.718241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:34.066 [2024-07-20 15:53:08.718341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:34.066 [2024-07-20 15:53:08.718399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.066 [2024-07-20 15:53:08.718460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.066 [2024-07-20 15:53:08.718498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:34.066 [2024-07-20 15:53:08.718529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:34.066 [2024-07-20 15:53:08.718634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.066 [2024-07-20 15:53:08.718673] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:34.066 [2024-07-20 15:53:08.720419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.066 [2024-07-20 15:53:08.720533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:34.066 [2024-07-20 15:53:08.720618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:17:34.066 [2024-07-20 15:53:08.720653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.066 [2024-07-20 15:53:08.720728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.066 [2024-07-20 15:53:08.720817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:34.066 [2024-07-20 15:53:08.720890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:34.066 [2024-07-20 15:53:08.720921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.066 [2024-07-20 15:53:08.720965] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:34.066 [2024-07-20 15:53:08.721012] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:34.066 [2024-07-20 15:53:08.721098] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:34.066 [2024-07-20 15:53:08.721236] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:34.066 [2024-07-20 15:53:08.721384] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:34.066 [2024-07-20 15:53:08.721506] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:34.066 [2024-07-20 15:53:08.721526] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:34.066 [2024-07-20 15:53:08.721540] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device 
capacity: 103424.00 MiB 00:17:34.066 [2024-07-20 15:53:08.721562] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:34.066 [2024-07-20 15:53:08.721574] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:34.066 [2024-07-20 15:53:08.721585] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:34.066 [2024-07-20 15:53:08.721603] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:34.066 [2024-07-20 15:53:08.721617] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:34.066 [2024-07-20 15:53:08.721629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.066 [2024-07-20 15:53:08.721640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:34.066 [2024-07-20 15:53:08.721651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:17:34.066 [2024-07-20 15:53:08.721662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.066 [2024-07-20 15:53:08.721748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.066 [2024-07-20 15:53:08.721761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:34.066 [2024-07-20 15:53:08.721772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:34.066 [2024-07-20 15:53:08.721782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.066 [2024-07-20 15:53:08.721871] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:34.066 [2024-07-20 15:53:08.721885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:34.066 [2024-07-20 15:53:08.721896] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.066 [2024-07-20 15:53:08.721906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.066 [2024-07-20 15:53:08.721917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:34.066 [2024-07-20 15:53:08.721927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:34.066 [2024-07-20 15:53:08.721937] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:34.066 [2024-07-20 15:53:08.721946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:34.066 [2024-07-20 15:53:08.721957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:34.066 [2024-07-20 15:53:08.721966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.066 [2024-07-20 15:53:08.721976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:34.066 [2024-07-20 15:53:08.721986] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:34.066 [2024-07-20 15:53:08.721998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.066 [2024-07-20 15:53:08.722008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:34.066 [2024-07-20 15:53:08.722018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:34.066 [2024-07-20 15:53:08.722027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.066 [2024-07-20 15:53:08.722037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:34.066 [2024-07-20 15:53:08.722046] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 
124.00 MiB 00:17:34.066 [2024-07-20 15:53:08.722056] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.066 [2024-07-20 15:53:08.722066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:34.066 [2024-07-20 15:53:08.722077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:34.066 [2024-07-20 15:53:08.722087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.066 [2024-07-20 15:53:08.722096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:34.066 [2024-07-20 15:53:08.722106] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:34.066 [2024-07-20 15:53:08.722116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.066 [2024-07-20 15:53:08.722125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:34.066 [2024-07-20 15:53:08.722135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:34.066 [2024-07-20 15:53:08.722144] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.066 [2024-07-20 15:53:08.722161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:34.066 [2024-07-20 15:53:08.722171] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:34.066 [2024-07-20 15:53:08.722181] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:34.066 [2024-07-20 15:53:08.722191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:34.066 [2024-07-20 15:53:08.722201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:34.066 [2024-07-20 15:53:08.722210] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.066 [2024-07-20 15:53:08.722219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:34.066 [2024-07-20 15:53:08.722229] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:34.066 [2024-07-20 15:53:08.722238] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.066 [2024-07-20 15:53:08.722248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:34.066 [2024-07-20 15:53:08.722257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:34.066 [2024-07-20 15:53:08.722277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.066 [2024-07-20 15:53:08.722286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:34.066 [2024-07-20 15:53:08.722296] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:34.066 [2024-07-20 15:53:08.722306] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.066 [2024-07-20 15:53:08.722315] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:34.066 [2024-07-20 15:53:08.722328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:34.066 [2024-07-20 15:53:08.722346] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.066 [2024-07-20 15:53:08.722369] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.066 [2024-07-20 15:53:08.722380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:34.066 [2024-07-20 15:53:08.722391] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:34.066 [2024-07-20 15:53:08.722400] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:34.066 [2024-07-20 15:53:08.722410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:34.066 [2024-07-20 15:53:08.722419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:34.066 [2024-07-20 15:53:08.722429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:34.066 [2024-07-20 15:53:08.722440] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:34.066 [2024-07-20 15:53:08.722453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.066 [2024-07-20 15:53:08.722466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:34.066 [2024-07-20 15:53:08.722479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:34.067 [2024-07-20 15:53:08.722490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:34.067 [2024-07-20 15:53:08.722501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:34.067 [2024-07-20 15:53:08.722512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:34.067 [2024-07-20 15:53:08.722526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:34.067 [2024-07-20 15:53:08.722537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:34.067 [2024-07-20 15:53:08.722548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:34.067 [2024-07-20 15:53:08.722559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:34.067 [2024-07-20 15:53:08.722570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:34.067 [2024-07-20 15:53:08.722581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:34.067 [2024-07-20 15:53:08.722591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:34.067 [2024-07-20 15:53:08.722602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:34.067 [2024-07-20 15:53:08.722613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:34.067 [2024-07-20 15:53:08.722624] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:34.067 [2024-07-20 15:53:08.722638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.067 [2024-07-20 15:53:08.722659] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:34.067 [2024-07-20 15:53:08.722670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:34.067 [2024-07-20 15:53:08.722681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:34.067 [2024-07-20 15:53:08.722693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:34.067 [2024-07-20 15:53:08.722705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.722719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:34.067 [2024-07-20 15:53:08.722730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:17:34.067 [2024-07-20 15:53:08.722740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.745074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.745414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:34.067 [2024-07-20 15:53:08.745623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.295 ms 00:17:34.067 [2024-07-20 15:53:08.745841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.746318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.746652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:34.067 [2024-07-20 15:53:08.746930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:17:34.067 [2024-07-20 15:53:08.747072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.763862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.764103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:34.067 [2024-07-20 15:53:08.764317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.652 ms 00:17:34.067 [2024-07-20 15:53:08.764439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.764680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.764769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:34.067 [2024-07-20 15:53:08.764840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:34.067 [2024-07-20 15:53:08.764956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.765719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.765883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:34.067 [2024-07-20 15:53:08.766036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:17:34.067 [2024-07-20 15:53:08.766065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.766324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.766351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:17:34.067 [2024-07-20 15:53:08.766396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:17:34.067 [2024-07-20 15:53:08.766411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.773544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.773585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:34.067 [2024-07-20 15:53:08.773614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.112 ms 00:17:34.067 [2024-07-20 15:53:08.773629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.776521] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:34.067 [2024-07-20 15:53:08.776553] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:34.067 [2024-07-20 15:53:08.776571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.776582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:34.067 [2024-07-20 15:53:08.776593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.816 ms 00:17:34.067 [2024-07-20 15:53:08.776603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.789215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.789252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:34.067 [2024-07-20 15:53:08.789266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.586 ms 00:17:34.067 [2024-07-20 15:53:08.789283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.791067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.791099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:34.067 [2024-07-20 15:53:08.791111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:17:34.067 [2024-07-20 15:53:08.791121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.792644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.792673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:34.067 [2024-07-20 15:53:08.792684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.482 ms 00:17:34.067 [2024-07-20 15:53:08.792695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.792978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.792999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:34.067 [2024-07-20 15:53:08.793019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:17:34.067 [2024-07-20 15:53:08.793028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.812976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.813029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:34.067 
[2024-07-20 15:53:08.813047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.952 ms 00:17:34.067 [2024-07-20 15:53:08.813058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.819297] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:34.067 [2024-07-20 15:53:08.835237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.835285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:34.067 [2024-07-20 15:53:08.835301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.141 ms 00:17:34.067 [2024-07-20 15:53:08.835326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.835426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.835440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:34.067 [2024-07-20 15:53:08.835456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:34.067 [2024-07-20 15:53:08.835466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.835530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.835557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:34.067 [2024-07-20 15:53:08.835568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:34.067 [2024-07-20 15:53:08.835586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.835623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.835634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:34.067 [2024-07-20 15:53:08.835654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:34.067 [2024-07-20 15:53:08.835668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.835710] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:34.067 [2024-07-20 15:53:08.835721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.835732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:34.067 [2024-07-20 15:53:08.835742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:34.067 [2024-07-20 15:53:08.835752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.839458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.839506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:34.067 [2024-07-20 15:53:08.839519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.691 ms 00:17:34.067 [2024-07-20 15:53:08.839535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.839622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.067 [2024-07-20 15:53:08.839635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:34.067 [2024-07-20 15:53:08.839647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:34.067 [2024-07-20 
15:53:08.839657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.067 [2024-07-20 15:53:08.840601] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:34.067 [2024-07-20 15:53:08.841569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.735 ms, result 0 00:17:34.067 [2024-07-20 15:53:08.842222] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:34.067 [2024-07-20 15:53:08.852000] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.628  Copying: 28/256 [MB] (28 MBps) Copying: 53/256 [MB] (24 MBps) Copying: 79/256 [MB] (25 MBps) Copying: 104/256 [MB] (25 MBps) Copying: 129/256 [MB] (25 MBps) Copying: 154/256 [MB] (25 MBps) Copying: 180/256 [MB] (25 MBps) Copying: 205/256 [MB] (24 MBps) Copying: 230/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-20 15:53:19.331610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:44.628 [2024-07-20 15:53:19.333137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.628 [2024-07-20 15:53:19.333183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:44.628 [2024-07-20 15:53:19.333204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.628 [2024-07-20 15:53:19.333219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.628 [2024-07-20 15:53:19.333250] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:44.628 [2024-07-20 15:53:19.333975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.628 [2024-07-20 15:53:19.334017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:44.628 [2024-07-20 15:53:19.334034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:17:44.628 [2024-07-20 15:53:19.334048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.628 [2024-07-20 15:53:19.334390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.628 [2024-07-20 15:53:19.334420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:44.628 [2024-07-20 15:53:19.334440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:17:44.628 [2024-07-20 15:53:19.334454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.628 [2024-07-20 15:53:19.338827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.628 [2024-07-20 15:53:19.338862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:44.628 [2024-07-20 15:53:19.338878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.355 ms 00:17:44.628 [2024-07-20 15:53:19.338892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.628 [2024-07-20 15:53:19.345584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.628 [2024-07-20 15:53:19.345644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:44.628 [2024-07-20 15:53:19.345657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.673 ms 00:17:44.628 [2024-07-20 15:53:19.345688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:44.628 [2024-07-20 15:53:19.347124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.628 [2024-07-20 15:53:19.347166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:44.628 [2024-07-20 15:53:19.347179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:17:44.628 [2024-07-20 15:53:19.347189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.628 [2024-07-20 15:53:19.350860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.628 [2024-07-20 15:53:19.350899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:44.628 [2024-07-20 15:53:19.350912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.642 ms 00:17:44.628 [2024-07-20 15:53:19.350923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.628 [2024-07-20 15:53:19.351049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.628 [2024-07-20 15:53:19.351062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:44.628 [2024-07-20 15:53:19.351084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:44.628 [2024-07-20 15:53:19.351093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.628 [2024-07-20 15:53:19.353234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.628 [2024-07-20 15:53:19.353270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:44.628 [2024-07-20 15:53:19.353283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.125 ms 00:17:44.628 [2024-07-20 15:53:19.353293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.628 [2024-07-20 15:53:19.354821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.628 [2024-07-20 15:53:19.354855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:44.628 [2024-07-20 15:53:19.354867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.499 ms 00:17:44.628 [2024-07-20 15:53:19.354877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.628 [2024-07-20 15:53:19.355949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.628 [2024-07-20 15:53:19.355982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:44.628 [2024-07-20 15:53:19.355994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:17:44.628 [2024-07-20 15:53:19.356003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.628 [2024-07-20 15:53:19.357211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.628 [2024-07-20 15:53:19.357344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:44.628 [2024-07-20 15:53:19.357444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.149 ms 00:17:44.628 [2024-07-20 15:53:19.357481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.628 [2024-07-20 15:53:19.357841] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:44.628 [2024-07-20 15:53:19.357861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.357885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.357896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.357907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.357918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.357929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.357940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.357951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.357961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.357972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.357982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.357993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358493] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:44.628 [2024-07-20 15:53:19.358663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 
15:53:19.358759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.358990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.359000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:44.629 [2024-07-20 15:53:19.359018] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:44.629 [2024-07-20 15:53:19.359029] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f98535-0125-4c1a-b8ee-497ceb063ba4 00:17:44.629 [2024-07-20 15:53:19.359039] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:44.629 [2024-07-20 15:53:19.359049] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:44.629 [2024-07-20 15:53:19.359068] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:44.629 [2024-07-20 15:53:19.359085] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:44.629 [2024-07-20 15:53:19.359095] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:44.629 [2024-07-20 15:53:19.359109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:44.629 [2024-07-20 15:53:19.359126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:44.629 [2024-07-20 15:53:19.359135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:44.629 [2024-07-20 15:53:19.359143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:44.629 [2024-07-20 15:53:19.359153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.629 [2024-07-20 15:53:19.359163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:44.629 [2024-07-20 15:53:19.359181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:17:44.629 [2024-07-20 15:53:19.359194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.360904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.629 [2024-07-20 15:53:19.360924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:44.629 [2024-07-20 15:53:19.360936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.685 ms 00:17:44.629 [2024-07-20 15:53:19.360951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.361061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.629 [2024-07-20 15:53:19.361074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:44.629 [2024-07-20 15:53:19.361085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:44.629 [2024-07-20 15:53:19.361095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.367526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.629 [2024-07-20 15:53:19.367549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.629 [2024-07-20 15:53:19.367565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.629 [2024-07-20 15:53:19.367574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.367642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.629 [2024-07-20 15:53:19.367661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.629 [2024-07-20 15:53:19.367678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.629 [2024-07-20 15:53:19.367688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.367739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.629 [2024-07-20 15:53:19.367758] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.629 [2024-07-20 15:53:19.367777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.629 [2024-07-20 15:53:19.367787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.367809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.629 [2024-07-20 15:53:19.367820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.629 [2024-07-20 15:53:19.367830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.629 [2024-07-20 15:53:19.367839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.380613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.629 [2024-07-20 15:53:19.380657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.629 [2024-07-20 15:53:19.380670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.629 [2024-07-20 15:53:19.380684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.389120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.629 [2024-07-20 15:53:19.389159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.629 [2024-07-20 15:53:19.389171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.629 [2024-07-20 15:53:19.389196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.389227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.629 [2024-07-20 15:53:19.389237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.629 [2024-07-20 15:53:19.389248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.629 [2024-07-20 15:53:19.389258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.389292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.629 [2024-07-20 15:53:19.389303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.629 [2024-07-20 15:53:19.389313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.629 [2024-07-20 15:53:19.389330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.389417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.629 [2024-07-20 15:53:19.389431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.629 [2024-07-20 15:53:19.389441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.629 [2024-07-20 15:53:19.389451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.389485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.629 [2024-07-20 15:53:19.389501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:44.629 [2024-07-20 15:53:19.389512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.629 [2024-07-20 15:53:19.389521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.389575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:17:44.629 [2024-07-20 15:53:19.389587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.629 [2024-07-20 15:53:19.389597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.629 [2024-07-20 15:53:19.389607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.389667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.629 [2024-07-20 15:53:19.389680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.629 [2024-07-20 15:53:19.389691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.629 [2024-07-20 15:53:19.389710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.629 [2024-07-20 15:53:19.389845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.779 ms, result 0 00:17:44.887 00:17:44.887 00:17:44.887 15:53:19 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:45.454 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:45.454 15:53:20 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:45.454 15:53:20 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:45.454 15:53:20 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:45.454 15:53:20 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:45.454 15:53:20 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:45.454 15:53:20 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:45.454 15:53:20 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 89194 00:17:45.454 Process with pid 89194 is not found 00:17:45.454 15:53:20 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89194 ']' 00:17:45.454 15:53:20 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89194 00:17:45.454 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (89194) - No such process 00:17:45.454 15:53:20 ftl.ftl_trim -- common/autotest_common.sh@973 -- # echo 'Process with pid 89194 is not found' 00:17:45.455 ************************************ 00:17:45.455 END TEST ftl_trim 00:17:45.455 ************************************ 00:17:45.455 00:17:45.455 real 0m53.163s 00:17:45.455 user 1m12.585s 00:17:45.455 sys 0m5.788s 00:17:45.455 15:53:20 ftl.ftl_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:45.455 15:53:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:45.455 15:53:20 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:45.455 15:53:20 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:45.455 15:53:20 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:45.455 15:53:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:45.713 ************************************ 00:17:45.713 START TEST ftl_restore 00:17:45.713 ************************************ 00:17:45.713 15:53:20 ftl.ftl_restore -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:45.713 * Looking for test storage... 
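(For orientation, and not part of the captured console output: the lines that follow are the xtrace of restore.sh. A minimal sketch of the invocation, reconstructed from the run_test line above; the -c option, parsed by the getopts ':u:c:f' loop traced below, names the NV-cache controller, and the remaining BDF is the base device. Paths assume the same vagrant checkout seen throughout this log.)

    # hypothetical standalone rerun of the stage this log captures;
    # the harness itself goes through run_test, as shown above
    cd /home/vagrant/spdk_repo/spdk
    ./test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0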
00:17:45.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.713 15:53:20 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:45.713 15:53:20 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:45.713 15:53:20 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.713 15:53:20 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:45.713 15:53:20 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.HvdyhR8xiS 00:17:45.714 15:53:20 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=89414 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:45.714 15:53:20 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 89414 00:17:45.714 15:53:20 ftl.ftl_restore -- common/autotest_common.sh@827 -- # '[' -z 89414 ']' 00:17:45.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.714 15:53:20 ftl.ftl_restore -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.714 15:53:20 ftl.ftl_restore -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:45.714 15:53:20 ftl.ftl_restore -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.714 15:53:20 ftl.ftl_restore -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:45.714 15:53:20 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:45.973 [2024-07-20 15:53:20.528515] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:45.973 [2024-07-20 15:53:20.528844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89414 ] 00:17:45.973 [2024-07-20 15:53:20.679624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.973 [2024-07-20 15:53:20.720448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.541 15:53:21 ftl.ftl_restore -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:46.541 15:53:21 ftl.ftl_restore -- common/autotest_common.sh@860 -- # return 0 00:17:46.541 15:53:21 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:46.541 15:53:21 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:46.541 15:53:21 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:46.541 15:53:21 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:46.541 15:53:21 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:46.541 15:53:21 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:46.800 15:53:21 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:46.800 15:53:21 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:46.800 15:53:21 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:46.800 15:53:21 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:17:46.800 15:53:21 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:46.800 15:53:21 ftl.ftl_restore -- 
common/autotest_common.sh@1376 -- # local bs 00:17:46.800 15:53:21 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:46.800 15:53:21 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:47.059 15:53:21 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:47.059 { 00:17:47.059 "name": "nvme0n1", 00:17:47.059 "aliases": [ 00:17:47.059 "642885de-9797-4f2c-b83b-050390f675d2" 00:17:47.059 ], 00:17:47.059 "product_name": "NVMe disk", 00:17:47.059 "block_size": 4096, 00:17:47.059 "num_blocks": 1310720, 00:17:47.059 "uuid": "642885de-9797-4f2c-b83b-050390f675d2", 00:17:47.059 "assigned_rate_limits": { 00:17:47.059 "rw_ios_per_sec": 0, 00:17:47.059 "rw_mbytes_per_sec": 0, 00:17:47.059 "r_mbytes_per_sec": 0, 00:17:47.059 "w_mbytes_per_sec": 0 00:17:47.059 }, 00:17:47.059 "claimed": true, 00:17:47.059 "claim_type": "read_many_write_one", 00:17:47.059 "zoned": false, 00:17:47.059 "supported_io_types": { 00:17:47.059 "read": true, 00:17:47.059 "write": true, 00:17:47.059 "unmap": true, 00:17:47.059 "write_zeroes": true, 00:17:47.059 "flush": true, 00:17:47.059 "reset": true, 00:17:47.059 "compare": true, 00:17:47.059 "compare_and_write": false, 00:17:47.059 "abort": true, 00:17:47.059 "nvme_admin": true, 00:17:47.059 "nvme_io": true 00:17:47.059 }, 00:17:47.059 "driver_specific": { 00:17:47.059 "nvme": [ 00:17:47.059 { 00:17:47.059 "pci_address": "0000:00:11.0", 00:17:47.059 "trid": { 00:17:47.059 "trtype": "PCIe", 00:17:47.059 "traddr": "0000:00:11.0" 00:17:47.059 }, 00:17:47.059 "ctrlr_data": { 00:17:47.059 "cntlid": 0, 00:17:47.059 "vendor_id": "0x1b36", 00:17:47.059 "model_number": "QEMU NVMe Ctrl", 00:17:47.059 "serial_number": "12341", 00:17:47.059 "firmware_revision": "8.0.0", 00:17:47.059 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:47.059 "oacs": { 00:17:47.059 "security": 0, 00:17:47.059 "format": 1, 00:17:47.059 "firmware": 0, 00:17:47.059 "ns_manage": 1 00:17:47.059 }, 00:17:47.059 "multi_ctrlr": false, 00:17:47.059 "ana_reporting": false 00:17:47.059 }, 00:17:47.059 "vs": { 00:17:47.059 "nvme_version": "1.4" 00:17:47.059 }, 00:17:47.059 "ns_data": { 00:17:47.059 "id": 1, 00:17:47.059 "can_share": false 00:17:47.059 } 00:17:47.059 } 00:17:47.059 ], 00:17:47.059 "mp_policy": "active_passive" 00:17:47.059 } 00:17:47.059 } 00:17:47.059 ]' 00:17:47.059 15:53:21 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:47.059 15:53:21 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:47.059 15:53:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:47.059 15:53:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=1310720 00:17:47.059 15:53:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:17:47.059 15:53:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 5120 00:17:47.059 15:53:21 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:47.059 15:53:21 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:47.059 15:53:21 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:47.059 15:53:21 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:47.059 15:53:21 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:47.318 15:53:22 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=77f7c260-4873-4959-882c-5afc0f96e76f 00:17:47.318 15:53:22 ftl.ftl_restore -- 
ftl/common.sh@29 -- # for lvs in $stores 00:17:47.318 15:53:22 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 77f7c260-4873-4959-882c-5afc0f96e76f 00:17:47.577 15:53:22 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:47.835 15:53:22 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=9b54fc42-63e5-4da0-87a7-adbc1eebe43b 00:17:47.835 15:53:22 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9b54fc42-63e5-4da0-87a7-adbc1eebe43b 00:17:47.835 15:53:22 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=57072433-e56d-4800-a05d-25eca1e3a1d2 00:17:47.835 15:53:22 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:47.835 15:53:22 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 57072433-e56d-4800-a05d-25eca1e3a1d2 00:17:47.835 15:53:22 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:47.835 15:53:22 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:47.835 15:53:22 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=57072433-e56d-4800-a05d-25eca1e3a1d2 00:17:47.835 15:53:22 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:47.835 15:53:22 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 57072433-e56d-4800-a05d-25eca1e3a1d2 00:17:47.835 15:53:22 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=57072433-e56d-4800-a05d-25eca1e3a1d2 00:17:47.835 15:53:22 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:47.835 15:53:22 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:17:47.835 15:53:22 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:47.835 15:53:22 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57072433-e56d-4800-a05d-25eca1e3a1d2 00:17:48.094 15:53:22 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:48.094 { 00:17:48.094 "name": "57072433-e56d-4800-a05d-25eca1e3a1d2", 00:17:48.094 "aliases": [ 00:17:48.094 "lvs/nvme0n1p0" 00:17:48.094 ], 00:17:48.094 "product_name": "Logical Volume", 00:17:48.094 "block_size": 4096, 00:17:48.094 "num_blocks": 26476544, 00:17:48.094 "uuid": "57072433-e56d-4800-a05d-25eca1e3a1d2", 00:17:48.094 "assigned_rate_limits": { 00:17:48.094 "rw_ios_per_sec": 0, 00:17:48.094 "rw_mbytes_per_sec": 0, 00:17:48.094 "r_mbytes_per_sec": 0, 00:17:48.094 "w_mbytes_per_sec": 0 00:17:48.094 }, 00:17:48.094 "claimed": false, 00:17:48.094 "zoned": false, 00:17:48.094 "supported_io_types": { 00:17:48.094 "read": true, 00:17:48.094 "write": true, 00:17:48.094 "unmap": true, 00:17:48.094 "write_zeroes": true, 00:17:48.094 "flush": false, 00:17:48.094 "reset": true, 00:17:48.094 "compare": false, 00:17:48.094 "compare_and_write": false, 00:17:48.094 "abort": false, 00:17:48.094 "nvme_admin": false, 00:17:48.094 "nvme_io": false 00:17:48.094 }, 00:17:48.094 "driver_specific": { 00:17:48.094 "lvol": { 00:17:48.094 "lvol_store_uuid": "9b54fc42-63e5-4da0-87a7-adbc1eebe43b", 00:17:48.094 "base_bdev": "nvme0n1", 00:17:48.094 "thin_provision": true, 00:17:48.094 "num_allocated_clusters": 0, 00:17:48.094 "snapshot": false, 00:17:48.094 "clone": false, 00:17:48.094 "esnap_clone": false 00:17:48.094 } 00:17:48.094 } 00:17:48.094 } 00:17:48.094 ]' 00:17:48.094 15:53:22 ftl.ftl_restore -- 
common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:48.094 15:53:22 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:48.094 15:53:22 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:48.094 15:53:22 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:48.094 15:53:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:48.094 15:53:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:17:48.094 15:53:22 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:48.094 15:53:22 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:48.094 15:53:22 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:48.352 15:53:23 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:48.352 15:53:23 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:48.352 15:53:23 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 57072433-e56d-4800-a05d-25eca1e3a1d2 00:17:48.352 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=57072433-e56d-4800-a05d-25eca1e3a1d2 00:17:48.352 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:48.352 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:17:48.352 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:48.352 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57072433-e56d-4800-a05d-25eca1e3a1d2 00:17:48.611 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:48.611 { 00:17:48.611 "name": "57072433-e56d-4800-a05d-25eca1e3a1d2", 00:17:48.611 "aliases": [ 00:17:48.611 "lvs/nvme0n1p0" 00:17:48.611 ], 00:17:48.611 "product_name": "Logical Volume", 00:17:48.611 "block_size": 4096, 00:17:48.611 "num_blocks": 26476544, 00:17:48.611 "uuid": "57072433-e56d-4800-a05d-25eca1e3a1d2", 00:17:48.611 "assigned_rate_limits": { 00:17:48.611 "rw_ios_per_sec": 0, 00:17:48.611 "rw_mbytes_per_sec": 0, 00:17:48.611 "r_mbytes_per_sec": 0, 00:17:48.611 "w_mbytes_per_sec": 0 00:17:48.611 }, 00:17:48.611 "claimed": false, 00:17:48.611 "zoned": false, 00:17:48.611 "supported_io_types": { 00:17:48.611 "read": true, 00:17:48.611 "write": true, 00:17:48.611 "unmap": true, 00:17:48.611 "write_zeroes": true, 00:17:48.611 "flush": false, 00:17:48.611 "reset": true, 00:17:48.611 "compare": false, 00:17:48.611 "compare_and_write": false, 00:17:48.611 "abort": false, 00:17:48.611 "nvme_admin": false, 00:17:48.611 "nvme_io": false 00:17:48.611 }, 00:17:48.611 "driver_specific": { 00:17:48.611 "lvol": { 00:17:48.611 "lvol_store_uuid": "9b54fc42-63e5-4da0-87a7-adbc1eebe43b", 00:17:48.611 "base_bdev": "nvme0n1", 00:17:48.611 "thin_provision": true, 00:17:48.611 "num_allocated_clusters": 0, 00:17:48.611 "snapshot": false, 00:17:48.611 "clone": false, 00:17:48.611 "esnap_clone": false 00:17:48.611 } 00:17:48.611 } 00:17:48.611 } 00:17:48.611 ]' 00:17:48.611 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:48.611 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:48.611 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:48.611 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:48.611 15:53:23 ftl.ftl_restore 
-- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:48.611 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:17:48.611 15:53:23 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:48.611 15:53:23 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:48.869 15:53:23 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:48.869 15:53:23 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 57072433-e56d-4800-a05d-25eca1e3a1d2 00:17:48.869 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=57072433-e56d-4800-a05d-25eca1e3a1d2 00:17:48.869 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:48.869 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:17:48.869 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:17:48.869 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57072433-e56d-4800-a05d-25eca1e3a1d2 00:17:49.128 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:49.128 { 00:17:49.128 "name": "57072433-e56d-4800-a05d-25eca1e3a1d2", 00:17:49.128 "aliases": [ 00:17:49.128 "lvs/nvme0n1p0" 00:17:49.128 ], 00:17:49.128 "product_name": "Logical Volume", 00:17:49.128 "block_size": 4096, 00:17:49.128 "num_blocks": 26476544, 00:17:49.128 "uuid": "57072433-e56d-4800-a05d-25eca1e3a1d2", 00:17:49.128 "assigned_rate_limits": { 00:17:49.128 "rw_ios_per_sec": 0, 00:17:49.128 "rw_mbytes_per_sec": 0, 00:17:49.128 "r_mbytes_per_sec": 0, 00:17:49.128 "w_mbytes_per_sec": 0 00:17:49.128 }, 00:17:49.128 "claimed": false, 00:17:49.128 "zoned": false, 00:17:49.128 "supported_io_types": { 00:17:49.128 "read": true, 00:17:49.128 "write": true, 00:17:49.128 "unmap": true, 00:17:49.128 "write_zeroes": true, 00:17:49.128 "flush": false, 00:17:49.128 "reset": true, 00:17:49.128 "compare": false, 00:17:49.128 "compare_and_write": false, 00:17:49.128 "abort": false, 00:17:49.128 "nvme_admin": false, 00:17:49.128 "nvme_io": false 00:17:49.128 }, 00:17:49.128 "driver_specific": { 00:17:49.128 "lvol": { 00:17:49.128 "lvol_store_uuid": "9b54fc42-63e5-4da0-87a7-adbc1eebe43b", 00:17:49.128 "base_bdev": "nvme0n1", 00:17:49.128 "thin_provision": true, 00:17:49.128 "num_allocated_clusters": 0, 00:17:49.128 "snapshot": false, 00:17:49.128 "clone": false, 00:17:49.128 "esnap_clone": false 00:17:49.129 } 00:17:49.129 } 00:17:49.129 } 00:17:49.129 ]' 00:17:49.129 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:49.129 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:17:49.129 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:49.129 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:49.129 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:49.129 15:53:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:17:49.129 15:53:23 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:49.129 15:53:23 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 57072433-e56d-4800-a05d-25eca1e3a1d2 --l2p_dram_limit 10' 00:17:49.129 15:53:23 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:49.129 15:53:23 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 
0000:00:10.0 ']' 00:17:49.129 15:53:23 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:49.129 15:53:23 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:49.129 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:49.129 15:53:23 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 57072433-e56d-4800-a05d-25eca1e3a1d2 --l2p_dram_limit 10 -c nvc0n1p0 00:17:49.129 [2024-07-20 15:53:23.910600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.129 [2024-07-20 15:53:23.910649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:49.129 [2024-07-20 15:53:23.910667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:49.129 [2024-07-20 15:53:23.910684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.129 [2024-07-20 15:53:23.910757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.129 [2024-07-20 15:53:23.910770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.129 [2024-07-20 15:53:23.910783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:49.129 [2024-07-20 15:53:23.910796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.129 [2024-07-20 15:53:23.910824] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:49.129 [2024-07-20 15:53:23.911110] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:49.129 [2024-07-20 15:53:23.911133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.129 [2024-07-20 15:53:23.911146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.129 [2024-07-20 15:53:23.911160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:17:49.129 [2024-07-20 15:53:23.911170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.129 [2024-07-20 15:53:23.911244] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3b248676-7b24-4869-bea0-4b1e4fa616c3 00:17:49.129 [2024-07-20 15:53:23.912635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.129 [2024-07-20 15:53:23.912673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:49.129 [2024-07-20 15:53:23.912685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:49.129 [2024-07-20 15:53:23.912702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.129 [2024-07-20 15:53:23.920040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.129 [2024-07-20 15:53:23.920076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:49.129 [2024-07-20 15:53:23.920088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.309 ms 00:17:49.129 [2024-07-20 15:53:23.920101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.129 [2024-07-20 15:53:23.920184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.129 [2024-07-20 15:53:23.920205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:49.129 [2024-07-20 15:53:23.920222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.061 ms 00:17:49.129 [2024-07-20 15:53:23.920236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.129 [2024-07-20 15:53:23.920312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.129 [2024-07-20 15:53:23.920327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:49.129 [2024-07-20 15:53:23.920337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:49.129 [2024-07-20 15:53:23.920350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.129 [2024-07-20 15:53:23.920399] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:49.129 [2024-07-20 15:53:23.922190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.129 [2024-07-20 15:53:23.922221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:49.129 [2024-07-20 15:53:23.922244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.798 ms 00:17:49.129 [2024-07-20 15:53:23.922254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.129 [2024-07-20 15:53:23.922301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.129 [2024-07-20 15:53:23.922313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:49.129 [2024-07-20 15:53:23.922326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:49.129 [2024-07-20 15:53:23.922336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.129 [2024-07-20 15:53:23.922373] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:49.129 [2024-07-20 15:53:23.922534] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:49.129 [2024-07-20 15:53:23.922552] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:49.129 [2024-07-20 15:53:23.922565] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:49.129 [2024-07-20 15:53:23.922581] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:49.129 [2024-07-20 15:53:23.922593] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:49.129 [2024-07-20 15:53:23.922606] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:49.129 [2024-07-20 15:53:23.922616] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:49.129 [2024-07-20 15:53:23.922632] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:49.129 [2024-07-20 15:53:23.922641] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:49.129 [2024-07-20 15:53:23.922655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.129 [2024-07-20 15:53:23.922664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:49.129 [2024-07-20 15:53:23.922684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:17:49.129 [2024-07-20 15:53:23.922694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.129 [2024-07-20 15:53:23.922767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
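(A quick cross-check, not from the log: the l2p region size in the layout dump that follows is fully determined by the two values just printed, 20971520 L2P entries at an address size of 4 bytes. A throwaway shell verification:)

    # 20971520 entries * 4 bytes/entry = 80 MiB, which matches the
    # "Region l2p ... blocks: 80.00 MiB" line in the dump below
    echo $(( 20971520 * 4 / 1024 / 1024 ))    # -> 80
    # restore.sh created the bdev with --l2p_dram_limit 10, so at most ~10 MiB
    # of that 80 MiB table may stay resident in DRAM; the later notice
    # "l2p maximum resident size is: 9 (of 10) MiB" is consistent with that limit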
00:17:49.129 [2024-07-20 15:53:23.922777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:49.129 [2024-07-20 15:53:23.922793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:49.129 [2024-07-20 15:53:23.922803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.129 [2024-07-20 15:53:23.922891] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:49.129 [2024-07-20 15:53:23.922903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:49.129 [2024-07-20 15:53:23.922916] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:49.129 [2024-07-20 15:53:23.922926] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.129 [2024-07-20 15:53:23.922939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:49.129 [2024-07-20 15:53:23.922948] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:49.129 [2024-07-20 15:53:23.922959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:49.129 [2024-07-20 15:53:23.922968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:49.129 [2024-07-20 15:53:23.922980] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:49.129 [2024-07-20 15:53:23.922989] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:49.129 [2024-07-20 15:53:23.923001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:49.129 [2024-07-20 15:53:23.923010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:49.129 [2024-07-20 15:53:23.923021] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:49.129 [2024-07-20 15:53:23.923031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:49.129 [2024-07-20 15:53:23.923045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:49.129 [2024-07-20 15:53:23.923054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.129 [2024-07-20 15:53:23.923066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:49.129 [2024-07-20 15:53:23.923076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:49.129 [2024-07-20 15:53:23.923088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.129 [2024-07-20 15:53:23.923097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:49.129 [2024-07-20 15:53:23.923108] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:49.387 [2024-07-20 15:53:23.923117] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.387 [2024-07-20 15:53:23.923129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:49.387 [2024-07-20 15:53:23.923138] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:49.387 [2024-07-20 15:53:23.923151] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.387 [2024-07-20 15:53:23.923160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:49.387 [2024-07-20 15:53:23.923172] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:49.387 [2024-07-20 15:53:23.923181] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.387 [2024-07-20 15:53:23.923192] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:17:49.387 [2024-07-20 15:53:23.923201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:49.387 [2024-07-20 15:53:23.923215] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.387 [2024-07-20 15:53:23.923224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:49.387 [2024-07-20 15:53:23.923236] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:49.387 [2024-07-20 15:53:23.923245] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:49.387 [2024-07-20 15:53:23.923256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:49.387 [2024-07-20 15:53:23.923265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:49.387 [2024-07-20 15:53:23.923276] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:49.387 [2024-07-20 15:53:23.923285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:49.387 [2024-07-20 15:53:23.923297] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:49.387 [2024-07-20 15:53:23.923305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.387 [2024-07-20 15:53:23.923317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:49.387 [2024-07-20 15:53:23.923326] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:49.387 [2024-07-20 15:53:23.923337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.387 [2024-07-20 15:53:23.923346] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:49.387 [2024-07-20 15:53:23.923372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:49.387 [2024-07-20 15:53:23.923382] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:49.387 [2024-07-20 15:53:23.923398] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.387 [2024-07-20 15:53:23.923411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:49.387 [2024-07-20 15:53:23.923424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:49.387 [2024-07-20 15:53:23.923434] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:49.387 [2024-07-20 15:53:23.923446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:49.387 [2024-07-20 15:53:23.923455] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:49.387 [2024-07-20 15:53:23.923467] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:49.387 [2024-07-20 15:53:23.923481] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:49.387 [2024-07-20 15:53:23.923495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:49.387 [2024-07-20 15:53:23.923517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:49.387 [2024-07-20 15:53:23.923530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:49.387 [2024-07-20 15:53:23.923542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x50a0 blk_sz:0x80 00:17:49.387 [2024-07-20 15:53:23.923554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:49.388 [2024-07-20 15:53:23.923564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:49.388 [2024-07-20 15:53:23.923576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:49.388 [2024-07-20 15:53:23.923587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:49.388 [2024-07-20 15:53:23.923602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:49.388 [2024-07-20 15:53:23.923612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:49.388 [2024-07-20 15:53:23.923625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:49.388 [2024-07-20 15:53:23.923635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:49.388 [2024-07-20 15:53:23.923647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:49.388 [2024-07-20 15:53:23.923657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:49.388 [2024-07-20 15:53:23.923670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:49.388 [2024-07-20 15:53:23.923680] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:49.388 [2024-07-20 15:53:23.923695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:49.388 [2024-07-20 15:53:23.923707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:49.388 [2024-07-20 15:53:23.923720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:49.388 [2024-07-20 15:53:23.923730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:49.388 [2024-07-20 15:53:23.923743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:49.388 [2024-07-20 15:53:23.923754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.388 [2024-07-20 15:53:23.923767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:49.388 [2024-07-20 15:53:23.923784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:17:49.388 [2024-07-20 15:53:23.923799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.388 [2024-07-20 15:53:23.923841] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:17:49.388 [2024-07-20 15:53:23.923856] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:52.670 [2024-07-20 15:53:27.275912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.275999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:52.670 [2024-07-20 15:53:27.276023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3357.511 ms 00:17:52.670 [2024-07-20 15:53:27.276036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.287016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.287064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.670 [2024-07-20 15:53:27.287080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.905 ms 00:17:52.670 [2024-07-20 15:53:27.287110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.287232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.287255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:52.670 [2024-07-20 15:53:27.287267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:52.670 [2024-07-20 15:53:27.287279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.297888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.297928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:52.670 [2024-07-20 15:53:27.297942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.584 ms 00:17:52.670 [2024-07-20 15:53:27.297971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.298010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.298024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:52.670 [2024-07-20 15:53:27.298035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:52.670 [2024-07-20 15:53:27.298047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.298524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.298543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:52.670 [2024-07-20 15:53:27.298554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:17:52.670 [2024-07-20 15:53:27.298566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.298670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.298691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:52.670 [2024-07-20 15:53:27.298708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:52.670 [2024-07-20 15:53:27.298723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.305728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.305764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:17:52.670 [2024-07-20 15:53:27.305777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.996 ms 00:17:52.670 [2024-07-20 15:53:27.305790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.313428] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:52.670 [2024-07-20 15:53:27.316580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.316608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:52.670 [2024-07-20 15:53:27.316623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.734 ms 00:17:52.670 [2024-07-20 15:53:27.316633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.400220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.400289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:52.670 [2024-07-20 15:53:27.400307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.684 ms 00:17:52.670 [2024-07-20 15:53:27.400337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.400553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.400568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:52.670 [2024-07-20 15:53:27.400591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:17:52.670 [2024-07-20 15:53:27.400601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.404058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.404093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:52.670 [2024-07-20 15:53:27.404108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.422 ms 00:17:52.670 [2024-07-20 15:53:27.404138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.407127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.407161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:52.670 [2024-07-20 15:53:27.407177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.945 ms 00:17:52.670 [2024-07-20 15:53:27.407187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.407479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.407494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:52.670 [2024-07-20 15:53:27.407508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:17:52.670 [2024-07-20 15:53:27.407518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.445509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.445548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:52.670 [2024-07-20 15:53:27.445566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.023 ms 00:17:52.670 [2024-07-20 15:53:27.445587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.449992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.450026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:52.670 [2024-07-20 15:53:27.450041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.371 ms 00:17:52.670 [2024-07-20 15:53:27.450067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.453112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.453145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:52.670 [2024-07-20 15:53:27.453160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.010 ms 00:17:52.670 [2024-07-20 15:53:27.453169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.456780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.456812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:52.670 [2024-07-20 15:53:27.456827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.578 ms 00:17:52.670 [2024-07-20 15:53:27.456853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.456904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.456917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:52.670 [2024-07-20 15:53:27.456931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:52.670 [2024-07-20 15:53:27.456941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.457004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.670 [2024-07-20 15:53:27.457015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:52.670 [2024-07-20 15:53:27.457028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:52.670 [2024-07-20 15:53:27.457037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.670 [2024-07-20 15:53:27.458123] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3552.871 ms, result 0 00:17:52.670 { 00:17:52.670 "name": "ftl0", 00:17:52.670 "uuid": "3b248676-7b24-4869-bea0-4b1e4fa616c3" 00:17:52.670 } 00:17:52.928 15:53:27 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:52.928 15:53:27 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:52.928 15:53:27 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:52.928 15:53:27 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:53.188 [2024-07-20 15:53:27.838093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.188 [2024-07-20 15:53:27.838144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:53.188 [2024-07-20 15:53:27.838158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:53.188 [2024-07-20 15:53:27.838190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.188 [2024-07-20 15:53:27.838215] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:17:53.188 [2024-07-20 15:53:27.838940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.188 [2024-07-20 15:53:27.838967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:53.188 [2024-07-20 15:53:27.838985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:17:53.188 [2024-07-20 15:53:27.838995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.188 [2024-07-20 15:53:27.839219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.188 [2024-07-20 15:53:27.839231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:53.188 [2024-07-20 15:53:27.839243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:17:53.188 [2024-07-20 15:53:27.839253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.189 [2024-07-20 15:53:27.841776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.189 [2024-07-20 15:53:27.841795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:53.189 [2024-07-20 15:53:27.841809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.503 ms 00:17:53.189 [2024-07-20 15:53:27.841828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.189 [2024-07-20 15:53:27.846905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.189 [2024-07-20 15:53:27.846938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:53.189 [2024-07-20 15:53:27.846953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.061 ms 00:17:53.189 [2024-07-20 15:53:27.846963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.189 [2024-07-20 15:53:27.848615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.189 [2024-07-20 15:53:27.848650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:53.189 [2024-07-20 15:53:27.848668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:17:53.189 [2024-07-20 15:53:27.848678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.189 [2024-07-20 15:53:27.853444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.189 [2024-07-20 15:53:27.853480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:53.189 [2024-07-20 15:53:27.853495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.736 ms 00:17:53.189 [2024-07-20 15:53:27.853522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.189 [2024-07-20 15:53:27.853635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.189 [2024-07-20 15:53:27.853647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:53.189 [2024-07-20 15:53:27.853662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:53.189 [2024-07-20 15:53:27.853675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.189 [2024-07-20 15:53:27.855605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.189 [2024-07-20 15:53:27.855637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:53.189 [2024-07-20 15:53:27.855651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.910 ms 
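(A sketch for context, not captured output: the shutdown trace around this point is the work performed by the bdev_ftl_unload call issued above. It persists the L2P, NV-cache and band metadata and then sets the FTL clean state, presumably so that a later load of the same device can restore instead of recovering.)

    # the unload that drives this trace (same call as the rpc.py line above)
    ./scripts/rpc.py bdev_ftl_unload -b ftl0
    # a clean restore would then reload the instance by UUID; the flags here
    # are assumed to mirror the bdev_ftl_create call earlier in this log,
    # using the UUID reported at creation time
    ./scripts/rpc.py bdev_ftl_load -b ftl0 -u 3b248676-7b24-4869-bea0-4b1e4fa616c3 \
        -d 57072433-e56d-4800-a05d-25eca1e3a1d2 -c nvc0n1p0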
00:17:53.189 [2024-07-20 15:53:27.855661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.189 [2024-07-20 15:53:27.857085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.189 [2024-07-20 15:53:27.857118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:53.189 [2024-07-20 15:53:27.857135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:17:53.189 [2024-07-20 15:53:27.857145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.189 [2024-07-20 15:53:27.858149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.189 [2024-07-20 15:53:27.858180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:53.189 [2024-07-20 15:53:27.858195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:17:53.189 [2024-07-20 15:53:27.858204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.189 [2024-07-20 15:53:27.859315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.189 [2024-07-20 15:53:27.859347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:53.189 [2024-07-20 15:53:27.859374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:17:53.189 [2024-07-20 15:53:27.859384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.189 [2024-07-20 15:53:27.859418] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:53.189 [2024-07-20 15:53:27.859435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 
0 state: free 00:17:53.189 [2024-07-20 15:53:27.859618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
39: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.859992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:53.189 [2024-07-20 15:53:27.860217] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860530] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:53.190 [2024-07-20 15:53:27.860673] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:53.190 [2024-07-20 15:53:27.860704] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3b248676-7b24-4869-bea0-4b1e4fa616c3 00:17:53.190 [2024-07-20 15:53:27.860715] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:53.190 [2024-07-20 15:53:27.860727] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:53.190 [2024-07-20 15:53:27.860737] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:53.190 [2024-07-20 15:53:27.860750] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:53.190 [2024-07-20 15:53:27.860759] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:53.190 [2024-07-20 15:53:27.860772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:53.190 [2024-07-20 15:53:27.860785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:53.190 [2024-07-20 15:53:27.860797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:53.190 [2024-07-20 15:53:27.860806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:53.190 [2024-07-20 15:53:27.860818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.190 [2024-07-20 15:53:27.860828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:53.190 [2024-07-20 15:53:27.860841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.404 ms 00:17:53.190 [2024-07-20 15:53:27.860851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.862603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.190 [2024-07-20 15:53:27.862624] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:53.190 [2024-07-20 15:53:27.862640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:17:53.190 [2024-07-20 15:53:27.862650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.862759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.190 [2024-07-20 15:53:27.862771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:53.190 [2024-07-20 15:53:27.862784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:53.190 [2024-07-20 15:53:27.862793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.869793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.190 [2024-07-20 15:53:27.869822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:53.190 [2024-07-20 15:53:27.869837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.190 [2024-07-20 15:53:27.869858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.869910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.190 [2024-07-20 15:53:27.869921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:53.190 [2024-07-20 15:53:27.869934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.190 [2024-07-20 15:53:27.869943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.870011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.190 [2024-07-20 15:53:27.870024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:53.190 [2024-07-20 15:53:27.870039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.190 [2024-07-20 15:53:27.870048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.870071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.190 [2024-07-20 15:53:27.870082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:53.190 [2024-07-20 15:53:27.870094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.190 [2024-07-20 15:53:27.870103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.881398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.190 [2024-07-20 15:53:27.881442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:53.190 [2024-07-20 15:53:27.881458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.190 [2024-07-20 15:53:27.881484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.889647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.190 [2024-07-20 15:53:27.889680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:53.190 [2024-07-20 15:53:27.889695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.190 [2024-07-20 15:53:27.889705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.889779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
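The trace records above and below follow a fixed four-part pattern emitted for every FTL management step: an Action or Rollback header, then the step's name, duration, and status. A minimal C sketch of that step-tracing pattern follows; run_step, step_fn, and noop are hypothetical names invented here for illustration, and the sketch mirrors only the record format visible in this log, not SPDK's actual ftl_mngt implementation.

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

/* Hypothetical tracer mirroring the four-record pattern in this log:
 * a kind ("Action" or "Rollback"), then name, duration in ms, status. */
typedef int (*step_fn)(void);

static void run_step(const char *kind, const char *name, step_fn fn)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    int status = fn();                              /* run the step body */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1e3
              + (t1.tv_nsec - t0.tv_nsec) / 1e6;    /* wall-clock duration */
    printf("*NOTICE*: [FTL][ftl0] %s\n", kind);
    printf("*NOTICE*: [FTL][ftl0]  name:     %s\n", name);
    printf("*NOTICE*: [FTL][ftl0]  duration: %.3f ms\n", ms);
    printf("*NOTICE*: [FTL][ftl0]  status:   %d\n", status);
}

static int noop(void) { return 0; }                 /* stand-in step body */

int main(void)
{
    run_step("Action", "Persist superblock", noop);
    run_step("Rollback", "Initialize superblock", noop);
    return 0;
}

In the shutdown trace here, every Rollback record reports a duration of 0.000 ms, consistent with cleanup steps that return immediately once the device state has already been persisted.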
00:17:53.190 [2024-07-20 15:53:27.889790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:53.190 [2024-07-20 15:53:27.889806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.190 [2024-07-20 15:53:27.889815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.889855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.190 [2024-07-20 15:53:27.889868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:53.190 [2024-07-20 15:53:27.889881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.190 [2024-07-20 15:53:27.889889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.889966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.190 [2024-07-20 15:53:27.889977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:53.190 [2024-07-20 15:53:27.889990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.190 [2024-07-20 15:53:27.889999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.890036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.190 [2024-07-20 15:53:27.890047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:53.190 [2024-07-20 15:53:27.890062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.190 [2024-07-20 15:53:27.890072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.890117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.190 [2024-07-20 15:53:27.890127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:53.190 [2024-07-20 15:53:27.890141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.190 [2024-07-20 15:53:27.890150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.890195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.190 [2024-07-20 15:53:27.890208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:53.190 [2024-07-20 15:53:27.890220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.190 [2024-07-20 15:53:27.890229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.190 [2024-07-20 15:53:27.890402] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.332 ms, result 0 00:17:53.190 true 00:17:53.190 15:53:27 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 89414 00:17:53.190 15:53:27 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 89414 ']' 00:17:53.190 15:53:27 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 89414 00:17:53.191 15:53:27 ftl.ftl_restore -- common/autotest_common.sh@951 -- # uname 00:17:53.191 15:53:27 ftl.ftl_restore -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:53.191 15:53:27 ftl.ftl_restore -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89414 00:17:53.191 killing process with pid 89414 00:17:53.191 15:53:27 ftl.ftl_restore -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:53.191 15:53:27 ftl.ftl_restore -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:53.191 15:53:27 ftl.ftl_restore -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89414' 00:17:53.191 15:53:27 ftl.ftl_restore -- common/autotest_common.sh@965 -- # kill 89414 00:17:53.191 15:53:27 ftl.ftl_restore -- common/autotest_common.sh@970 -- # wait 89414 00:17:56.492 15:53:30 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:59.772 262144+0 records in 00:17:59.772 262144+0 records out 00:17:59.772 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.70292 s, 290 MB/s 00:17:59.772 15:53:34 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:01.674 15:53:36 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:01.674 [2024-07-20 15:53:36.183537] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:01.674 [2024-07-20 15:53:36.183654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89611 ] 00:18:01.674 [2024-07-20 15:53:36.331194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.674 [2024-07-20 15:53:36.371969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.935 [2024-07-20 15:53:36.471770] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:01.935 [2024-07-20 15:53:36.471837] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:01.935 [2024-07-20 15:53:36.622509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.935 [2024-07-20 15:53:36.622556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:01.935 [2024-07-20 15:53:36.622572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:01.935 [2024-07-20 15:53:36.622581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.935 [2024-07-20 15:53:36.622625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.935 [2024-07-20 15:53:36.622636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:01.935 [2024-07-20 15:53:36.622646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:01.935 [2024-07-20 15:53:36.622658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.935 [2024-07-20 15:53:36.622677] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:01.935 [2024-07-20 15:53:36.622889] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:01.935 [2024-07-20 15:53:36.622908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.935 [2024-07-20 15:53:36.622921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:01.935 [2024-07-20 15:53:36.622932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:18:01.935 [2024-07-20 15:53:36.622941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.935 [2024-07-20 15:53:36.624322] 
mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:01.935 [2024-07-20 15:53:36.626812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.935 [2024-07-20 15:53:36.626849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:01.935 [2024-07-20 15:53:36.626866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.496 ms 00:18:01.935 [2024-07-20 15:53:36.626877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.935 [2024-07-20 15:53:36.626930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.935 [2024-07-20 15:53:36.626942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:01.935 [2024-07-20 15:53:36.626953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:01.935 [2024-07-20 15:53:36.626970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.935 [2024-07-20 15:53:36.633460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.935 [2024-07-20 15:53:36.633487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:01.935 [2024-07-20 15:53:36.633498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.442 ms 00:18:01.935 [2024-07-20 15:53:36.633514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.935 [2024-07-20 15:53:36.633601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.935 [2024-07-20 15:53:36.633613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:01.935 [2024-07-20 15:53:36.633629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:01.935 [2024-07-20 15:53:36.633638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.935 [2024-07-20 15:53:36.633689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.935 [2024-07-20 15:53:36.633708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:01.935 [2024-07-20 15:53:36.633724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:01.935 [2024-07-20 15:53:36.633732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.935 [2024-07-20 15:53:36.633756] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:01.935 [2024-07-20 15:53:36.635359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.935 [2024-07-20 15:53:36.635386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:01.935 [2024-07-20 15:53:36.635398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:18:01.935 [2024-07-20 15:53:36.635407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.935 [2024-07-20 15:53:36.635438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.935 [2024-07-20 15:53:36.635448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:01.935 [2024-07-20 15:53:36.635469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:01.935 [2024-07-20 15:53:36.635478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.935 [2024-07-20 15:53:36.635500] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:01.935 [2024-07-20 15:53:36.635523] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:01.935 [2024-07-20 15:53:36.635566] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:01.935 [2024-07-20 15:53:36.635585] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:01.936 [2024-07-20 15:53:36.635669] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:01.936 [2024-07-20 15:53:36.635686] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:01.936 [2024-07-20 15:53:36.635701] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:01.936 [2024-07-20 15:53:36.635715] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:01.936 [2024-07-20 15:53:36.635726] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:01.936 [2024-07-20 15:53:36.635737] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:01.936 [2024-07-20 15:53:36.635746] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:01.936 [2024-07-20 15:53:36.635755] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:01.936 [2024-07-20 15:53:36.635765] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:01.936 [2024-07-20 15:53:36.635775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.936 [2024-07-20 15:53:36.635784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:01.936 [2024-07-20 15:53:36.635794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:18:01.936 [2024-07-20 15:53:36.635805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.936 [2024-07-20 15:53:36.635871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.936 [2024-07-20 15:53:36.635881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:01.936 [2024-07-20 15:53:36.635891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:01.936 [2024-07-20 15:53:36.635900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.936 [2024-07-20 15:53:36.635981] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:01.936 [2024-07-20 15:53:36.635992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:01.936 [2024-07-20 15:53:36.636009] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:01.936 [2024-07-20 15:53:36.636019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.936 [2024-07-20 15:53:36.636034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:01.936 [2024-07-20 15:53:36.636043] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:01.936 [2024-07-20 15:53:36.636053] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:01.936 [2024-07-20 15:53:36.636062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:01.936 [2024-07-20 15:53:36.636071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
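These layout figures are internally consistent, assuming a 4 KiB FTL logical block size (an assumption here, though it matches the bs=4K test I/O used elsewhere in this run): 20971520 L2P entries x 4 bytes per entry = 83886080 bytes = 80.00 MiB, which is exactly the size reported for Region l2p in the NV cache layout dump above, and 20971520 entries x 4 KiB per block = 80 GiB of user-addressable space carved out of the 103424.00 MiB (101 GiB) base device. Likewise, in the base-device layout that follows, Region data_btm spans 102400.00 MiB, i.e. 100 bands of 1024 MiB raw, while each band exposes 261120 4 KiB blocks (1020 MiB) in the validity dumps, the ~4 MiB per-band difference presumably holding per-band metadata.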
00:18:01.936 [2024-07-20 15:53:36.636080] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:01.936 [2024-07-20 15:53:36.636089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:01.936 [2024-07-20 15:53:36.636099] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:01.936 [2024-07-20 15:53:36.636107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:01.936 [2024-07-20 15:53:36.636117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:01.936 [2024-07-20 15:53:36.636126] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:01.936 [2024-07-20 15:53:36.636135] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.936 [2024-07-20 15:53:36.636147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:01.936 [2024-07-20 15:53:36.636156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:01.936 [2024-07-20 15:53:36.636165] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.936 [2024-07-20 15:53:36.636174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:01.936 [2024-07-20 15:53:36.636184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:01.936 [2024-07-20 15:53:36.636192] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.936 [2024-07-20 15:53:36.636201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:01.936 [2024-07-20 15:53:36.636210] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:01.936 [2024-07-20 15:53:36.636218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.936 [2024-07-20 15:53:36.636227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:01.936 [2024-07-20 15:53:36.636236] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:01.936 [2024-07-20 15:53:36.636244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.936 [2024-07-20 15:53:36.636253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:01.936 [2024-07-20 15:53:36.636262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:01.936 [2024-07-20 15:53:36.636271] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:01.936 [2024-07-20 15:53:36.636279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:01.936 [2024-07-20 15:53:36.636295] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:01.936 [2024-07-20 15:53:36.636304] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:01.936 [2024-07-20 15:53:36.636313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:01.936 [2024-07-20 15:53:36.636322] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:01.936 [2024-07-20 15:53:36.636332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:01.936 [2024-07-20 15:53:36.636341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:01.936 [2024-07-20 15:53:36.636350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:01.936 [2024-07-20 15:53:36.636391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.936 [2024-07-20 15:53:36.636401] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:01.936 [2024-07-20 15:53:36.636410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:01.936 [2024-07-20 15:53:36.636419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.936 [2024-07-20 15:53:36.636427] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:01.936 [2024-07-20 15:53:36.636444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:01.936 [2024-07-20 15:53:36.636453] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:01.936 [2024-07-20 15:53:36.636469] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:01.936 [2024-07-20 15:53:36.636478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:01.936 [2024-07-20 15:53:36.636490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:01.936 [2024-07-20 15:53:36.636500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:01.936 [2024-07-20 15:53:36.636509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:01.936 [2024-07-20 15:53:36.636517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:01.936 [2024-07-20 15:53:36.636526] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:01.936 [2024-07-20 15:53:36.636536] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:01.936 [2024-07-20 15:53:36.636548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:01.936 [2024-07-20 15:53:36.636559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:01.936 [2024-07-20 15:53:36.636569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:01.936 [2024-07-20 15:53:36.636579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:01.936 [2024-07-20 15:53:36.636589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:01.936 [2024-07-20 15:53:36.636600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:01.936 [2024-07-20 15:53:36.636610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:01.936 [2024-07-20 15:53:36.636619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:01.936 [2024-07-20 15:53:36.636630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:01.936 [2024-07-20 15:53:36.636640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:01.936 [2024-07-20 15:53:36.636652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:01.936 [2024-07-20 15:53:36.636663] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:01.936 [2024-07-20 15:53:36.636672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:01.936 [2024-07-20 15:53:36.636682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:01.936 [2024-07-20 15:53:36.636695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:01.936 [2024-07-20 15:53:36.636705] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:01.936 [2024-07-20 15:53:36.636715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:01.936 [2024-07-20 15:53:36.636726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:01.936 [2024-07-20 15:53:36.636736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:01.936 [2024-07-20 15:53:36.636754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:01.936 [2024-07-20 15:53:36.636764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:01.936 [2024-07-20 15:53:36.636774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.936 [2024-07-20 15:53:36.636785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:01.936 [2024-07-20 15:53:36.636794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:18:01.936 [2024-07-20 15:53:36.636807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.936 [2024-07-20 15:53:36.657126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.936 [2024-07-20 15:53:36.657214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:01.936 [2024-07-20 15:53:36.657255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.294 ms 00:18:01.936 [2024-07-20 15:53:36.657311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.936 [2024-07-20 15:53:36.657571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.936 [2024-07-20 15:53:36.657629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:01.936 [2024-07-20 15:53:36.657661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:18:01.936 [2024-07-20 15:53:36.657717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.936 [2024-07-20 15:53:36.673892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.936 [2024-07-20 15:53:36.673943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:01.936 [2024-07-20 15:53:36.673979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.045 ms 00:18:01.937 [2024-07-20 15:53:36.673996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.937 [2024-07-20 15:53:36.674049] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.937 [2024-07-20 15:53:36.674068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:01.937 [2024-07-20 15:53:36.674086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:01.937 [2024-07-20 15:53:36.674110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.937 [2024-07-20 15:53:36.674703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.937 [2024-07-20 15:53:36.674732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:01.937 [2024-07-20 15:53:36.674751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:18:01.937 [2024-07-20 15:53:36.674769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.937 [2024-07-20 15:53:36.674953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.937 [2024-07-20 15:53:36.674988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:01.937 [2024-07-20 15:53:36.675006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:18:01.937 [2024-07-20 15:53:36.675023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.937 [2024-07-20 15:53:36.682187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.937 [2024-07-20 15:53:36.682222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:01.937 [2024-07-20 15:53:36.682246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.137 ms 00:18:01.937 [2024-07-20 15:53:36.682258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.937 [2024-07-20 15:53:36.685032] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:01.937 [2024-07-20 15:53:36.685072] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:01.937 [2024-07-20 15:53:36.685089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.937 [2024-07-20 15:53:36.685102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:01.937 [2024-07-20 15:53:36.685114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:18:01.937 [2024-07-20 15:53:36.685126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.937 [2024-07-20 15:53:36.698106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.937 [2024-07-20 15:53:36.698145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:01.937 [2024-07-20 15:53:36.698158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.949 ms 00:18:01.937 [2024-07-20 15:53:36.698183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.937 [2024-07-20 15:53:36.699830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.937 [2024-07-20 15:53:36.699861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:01.937 [2024-07-20 15:53:36.699873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:18:01.937 [2024-07-20 15:53:36.699883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.937 [2024-07-20 15:53:36.701436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.937 [2024-07-20 
15:53:36.701465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:01.937 [2024-07-20 15:53:36.701476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.520 ms 00:18:01.937 [2024-07-20 15:53:36.701486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.937 [2024-07-20 15:53:36.701759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.937 [2024-07-20 15:53:36.701774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:01.937 [2024-07-20 15:53:36.701785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:18:01.937 [2024-07-20 15:53:36.701795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.937 [2024-07-20 15:53:36.721675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:01.937 [2024-07-20 15:53:36.721741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:01.937 [2024-07-20 15:53:36.721758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.880 ms 00:18:01.937 [2024-07-20 15:53:36.721768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:01.937 [2024-07-20 15:53:36.728031] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:02.196 [2024-07-20 15:53:36.730645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.196 [2024-07-20 15:53:36.730689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:02.196 [2024-07-20 15:53:36.730702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.846 ms 00:18:02.197 [2024-07-20 15:53:36.730712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.197 [2024-07-20 15:53:36.730761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.197 [2024-07-20 15:53:36.730773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:02.197 [2024-07-20 15:53:36.730784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:02.197 [2024-07-20 15:53:36.730794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.197 [2024-07-20 15:53:36.730873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.197 [2024-07-20 15:53:36.730885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:02.197 [2024-07-20 15:53:36.730898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:02.197 [2024-07-20 15:53:36.730911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.197 [2024-07-20 15:53:36.730933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.197 [2024-07-20 15:53:36.730951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:02.197 [2024-07-20 15:53:36.730962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:02.197 [2024-07-20 15:53:36.730971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.197 [2024-07-20 15:53:36.731004] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:02.197 [2024-07-20 15:53:36.731016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.197 [2024-07-20 15:53:36.731025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:02.197 [2024-07-20 
15:53:36.731035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:02.197 [2024-07-20 15:53:36.731056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.197 [2024-07-20 15:53:36.734471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.197 [2024-07-20 15:53:36.734504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:02.197 [2024-07-20 15:53:36.734517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.401 ms 00:18:02.197 [2024-07-20 15:53:36.734527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.197 [2024-07-20 15:53:36.734591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.197 [2024-07-20 15:53:36.734603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:02.197 [2024-07-20 15:53:36.734614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:02.197 [2024-07-20 15:53:36.734623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.197 [2024-07-20 15:53:36.735837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.009 ms, result 0 00:18:42.103  Copying: 25/1024 [MB] (25 MBps) Copying: 50/1024 [MB] (25 MBps) Copying: 76/1024 [MB] (25 MBps) Copying: 101/1024 [MB] (25 MBps) Copying: 127/1024 [MB] (25 MBps) Copying: 153/1024 [MB] (25 MBps) Copying: 178/1024 [MB] (25 MBps) Copying: 203/1024 [MB] (24 MBps) Copying: 227/1024 [MB] (24 MBps) Copying: 252/1024 [MB] (24 MBps) Copying: 276/1024 [MB] (24 MBps) Copying: 301/1024 [MB] (24 MBps) Copying: 325/1024 [MB] (24 MBps) Copying: 350/1024 [MB] (25 MBps) Copying: 375/1024 [MB] (24 MBps) Copying: 401/1024 [MB] (25 MBps) Copying: 426/1024 [MB] (25 MBps) Copying: 451/1024 [MB] (25 MBps) Copying: 477/1024 [MB] (25 MBps) Copying: 503/1024 [MB] (25 MBps) Copying: 528/1024 [MB] (25 MBps) Copying: 554/1024 [MB] (25 MBps) Copying: 579/1024 [MB] (24 MBps) Copying: 604/1024 [MB] (25 MBps) Copying: 630/1024 [MB] (25 MBps) Copying: 655/1024 [MB] (25 MBps) Copying: 681/1024 [MB] (25 MBps) Copying: 708/1024 [MB] (26 MBps) Copying: 734/1024 [MB] (26 MBps) Copying: 761/1024 [MB] (26 MBps) Copying: 787/1024 [MB] (26 MBps) Copying: 814/1024 [MB] (26 MBps) Copying: 840/1024 [MB] (26 MBps) Copying: 868/1024 [MB] (27 MBps) Copying: 894/1024 [MB] (26 MBps) Copying: 919/1024 [MB] (25 MBps) Copying: 945/1024 [MB] (25 MBps) Copying: 970/1024 [MB] (25 MBps) Copying: 996/1024 [MB] (25 MBps) Copying: 1021/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-20 15:54:16.796844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.103 [2024-07-20 15:54:16.796890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:42.103 [2024-07-20 15:54:16.796906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:42.103 [2024-07-20 15:54:16.796922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.103 [2024-07-20 15:54:16.796943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:42.103 [2024-07-20 15:54:16.797627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.103 [2024-07-20 15:54:16.797648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:42.103 [2024-07-20 15:54:16.797658] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:18:42.103 [2024-07-20 15:54:16.797667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.103 [2024-07-20 15:54:16.799266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.103 [2024-07-20 15:54:16.799316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:42.103 [2024-07-20 15:54:16.799329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:18:42.103 [2024-07-20 15:54:16.799338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.103 [2024-07-20 15:54:16.815654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.103 [2024-07-20 15:54:16.815692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:42.103 [2024-07-20 15:54:16.815705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.309 ms 00:18:42.103 [2024-07-20 15:54:16.815724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.103 [2024-07-20 15:54:16.820533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.103 [2024-07-20 15:54:16.820563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:42.103 [2024-07-20 15:54:16.820574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.786 ms 00:18:42.103 [2024-07-20 15:54:16.820583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.103 [2024-07-20 15:54:16.822008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.103 [2024-07-20 15:54:16.822044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:42.103 [2024-07-20 15:54:16.822056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.365 ms 00:18:42.103 [2024-07-20 15:54:16.822066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.103 [2024-07-20 15:54:16.825626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.103 [2024-07-20 15:54:16.825662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:42.103 [2024-07-20 15:54:16.825684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.540 ms 00:18:42.103 [2024-07-20 15:54:16.825694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.103 [2024-07-20 15:54:16.825811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.103 [2024-07-20 15:54:16.825831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:42.103 [2024-07-20 15:54:16.825842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:42.103 [2024-07-20 15:54:16.825851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.103 [2024-07-20 15:54:16.827926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.103 [2024-07-20 15:54:16.827961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:42.103 [2024-07-20 15:54:16.827973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:18:42.103 [2024-07-20 15:54:16.827982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.103 [2024-07-20 15:54:16.829448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.103 [2024-07-20 15:54:16.829479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:42.103 
[2024-07-20 15:54:16.829491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:18:42.103 [2024-07-20 15:54:16.829500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.103 [2024-07-20 15:54:16.830783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.103 [2024-07-20 15:54:16.830814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:42.103 [2024-07-20 15:54:16.830825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:18:42.103 [2024-07-20 15:54:16.830834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.103 [2024-07-20 15:54:16.831972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.103 [2024-07-20 15:54:16.832001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:42.103 [2024-07-20 15:54:16.832011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:18:42.103 [2024-07-20 15:54:16.832020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.103 [2024-07-20 15:54:16.832044] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:42.103 [2024-07-20 15:54:16.832070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:42.103 [2024-07-20 15:54:16.832082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:42.103 [2024-07-20 15:54:16.832093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:42.103 [2024-07-20 15:54:16.832103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:42.103 [2024-07-20 15:54:16.832114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:42.103 [2024-07-20 15:54:16.832124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:42.103 [2024-07-20 15:54:16.832135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:42.103 [2024-07-20 15:54:16.832145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:42.103 [2024-07-20 15:54:16.832155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832237] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 
[2024-07-20 15:54:16.832620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 
state: free 00:18:42.104 [2024-07-20 15:54:16.832888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.832990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 
0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:42.104 [2024-07-20 15:54:16.833247] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:42.104 [2024-07-20 15:54:16.833256] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3b248676-7b24-4869-bea0-4b1e4fa616c3 00:18:42.104 [2024-07-20 15:54:16.833274] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:42.104 [2024-07-20 15:54:16.833290] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:42.104 [2024-07-20 15:54:16.833299] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:42.104 [2024-07-20 15:54:16.833310] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:42.105 [2024-07-20 15:54:16.833319] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:42.105 [2024-07-20 15:54:16.833336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:42.105 [2024-07-20 15:54:16.833345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:42.105 [2024-07-20 15:54:16.833362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:42.105 [2024-07-20 15:54:16.833372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:42.105 [2024-07-20 15:54:16.833381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.105 [2024-07-20 15:54:16.833395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:42.105 [2024-07-20 15:54:16.833405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.340 ms 00:18:42.105 [2024-07-20 15:54:16.833414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.105 [2024-07-20 15:54:16.835066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.105 [2024-07-20 15:54:16.835085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:42.105 [2024-07-20 15:54:16.835096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.638 ms 00:18:42.105 [2024-07-20 15:54:16.835105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.105 [2024-07-20 15:54:16.835209] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:18:42.105 [2024-07-20 15:54:16.835219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:42.105 [2024-07-20 15:54:16.835230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:42.105 [2024-07-20 15:54:16.835239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.105 [2024-07-20 15:54:16.841130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.105 [2024-07-20 15:54:16.841151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:42.105 [2024-07-20 15:54:16.841170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.105 [2024-07-20 15:54:16.841180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.105 [2024-07-20 15:54:16.841230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.105 [2024-07-20 15:54:16.841241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:42.105 [2024-07-20 15:54:16.841251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.105 [2024-07-20 15:54:16.841260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.105 [2024-07-20 15:54:16.841320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.105 [2024-07-20 15:54:16.841332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:42.105 [2024-07-20 15:54:16.841342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.105 [2024-07-20 15:54:16.841351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.105 [2024-07-20 15:54:16.841386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.105 [2024-07-20 15:54:16.841400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:42.105 [2024-07-20 15:54:16.841410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.105 [2024-07-20 15:54:16.841419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.105 [2024-07-20 15:54:16.852930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.105 [2024-07-20 15:54:16.852961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:42.105 [2024-07-20 15:54:16.852974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.105 [2024-07-20 15:54:16.852984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.105 [2024-07-20 15:54:16.861081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.105 [2024-07-20 15:54:16.861112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:42.105 [2024-07-20 15:54:16.861124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.105 [2024-07-20 15:54:16.861134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.105 [2024-07-20 15:54:16.861179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.105 [2024-07-20 15:54:16.861190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:42.105 [2024-07-20 15:54:16.861200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.105 [2024-07-20 15:54:16.861218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:42.105 [2024-07-20 15:54:16.861243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.105 [2024-07-20 15:54:16.861261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:42.105 [2024-07-20 15:54:16.861275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.105 [2024-07-20 15:54:16.861285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.105 [2024-07-20 15:54:16.861364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.105 [2024-07-20 15:54:16.861377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:42.105 [2024-07-20 15:54:16.861388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.105 [2024-07-20 15:54:16.861398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.105 [2024-07-20 15:54:16.861431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.105 [2024-07-20 15:54:16.861444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:42.105 [2024-07-20 15:54:16.861454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.105 [2024-07-20 15:54:16.861467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.105 [2024-07-20 15:54:16.861504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.105 [2024-07-20 15:54:16.861515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:42.105 [2024-07-20 15:54:16.861524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.105 [2024-07-20 15:54:16.861534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.105 [2024-07-20 15:54:16.861581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:42.105 [2024-07-20 15:54:16.861593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:42.105 [2024-07-20 15:54:16.861606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:42.105 [2024-07-20 15:54:16.861615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.105 [2024-07-20 15:54:16.861760] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.958 ms, result 0 00:18:42.673 00:18:42.673 00:18:42.673 15:54:17 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:42.673 [2024-07-20 15:54:17.331469] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:42.673 [2024-07-20 15:54:17.331610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90038 ] 00:18:42.932 [2024-07-20 15:54:17.482090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.932 [2024-07-20 15:54:17.522207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.932 [2024-07-20 15:54:17.622060] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:42.932 [2024-07-20 15:54:17.622149] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.193 [2024-07-20 15:54:17.772695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.193 [2024-07-20 15:54:17.772744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:43.193 [2024-07-20 15:54:17.772758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:43.193 [2024-07-20 15:54:17.772768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.193 [2024-07-20 15:54:17.772833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.193 [2024-07-20 15:54:17.772845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:43.193 [2024-07-20 15:54:17.772856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:43.193 [2024-07-20 15:54:17.772869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.193 [2024-07-20 15:54:17.772889] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:43.193 [2024-07-20 15:54:17.773184] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:43.193 [2024-07-20 15:54:17.773214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.193 [2024-07-20 15:54:17.773228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:43.193 [2024-07-20 15:54:17.773239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:18:43.193 [2024-07-20 15:54:17.773248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.193 [2024-07-20 15:54:17.774644] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:43.193 [2024-07-20 15:54:17.777106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.193 [2024-07-20 15:54:17.777151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:43.193 [2024-07-20 15:54:17.777184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.467 ms 00:18:43.193 [2024-07-20 15:54:17.777195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.193 [2024-07-20 15:54:17.777249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.193 [2024-07-20 15:54:17.777267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:43.193 [2024-07-20 15:54:17.777286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:43.193 [2024-07-20 15:54:17.777302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.193 [2024-07-20 15:54:17.783918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.193 [2024-07-20 
15:54:17.783946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:43.193 [2024-07-20 15:54:17.783964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.541 ms 00:18:43.193 [2024-07-20 15:54:17.783973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.193 [2024-07-20 15:54:17.784072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.193 [2024-07-20 15:54:17.784085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:43.193 [2024-07-20 15:54:17.784095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:43.193 [2024-07-20 15:54:17.784111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.193 [2024-07-20 15:54:17.784165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.193 [2024-07-20 15:54:17.784177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:43.193 [2024-07-20 15:54:17.784195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:43.193 [2024-07-20 15:54:17.784204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.193 [2024-07-20 15:54:17.784229] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:43.193 [2024-07-20 15:54:17.785845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.193 [2024-07-20 15:54:17.785872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:43.193 [2024-07-20 15:54:17.785883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:18:43.193 [2024-07-20 15:54:17.785893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.193 [2024-07-20 15:54:17.785923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.193 [2024-07-20 15:54:17.785942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:43.193 [2024-07-20 15:54:17.785956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:43.193 [2024-07-20 15:54:17.785966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.193 [2024-07-20 15:54:17.785987] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:43.193 [2024-07-20 15:54:17.786011] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:43.193 [2024-07-20 15:54:17.786050] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:43.193 [2024-07-20 15:54:17.786069] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:43.193 [2024-07-20 15:54:17.786151] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:43.193 [2024-07-20 15:54:17.786174] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:43.193 [2024-07-20 15:54:17.786189] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:43.193 [2024-07-20 15:54:17.786202] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:43.193 [2024-07-20 15:54:17.786214] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:43.193 [2024-07-20 15:54:17.786225] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:43.193 [2024-07-20 15:54:17.786235] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:43.193 [2024-07-20 15:54:17.786244] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:43.193 [2024-07-20 15:54:17.786262] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:43.193 [2024-07-20 15:54:17.786273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.193 [2024-07-20 15:54:17.786282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:43.193 [2024-07-20 15:54:17.786299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:18:43.193 [2024-07-20 15:54:17.786312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.193 [2024-07-20 15:54:17.786388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.193 [2024-07-20 15:54:17.786407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:43.193 [2024-07-20 15:54:17.786417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:43.193 [2024-07-20 15:54:17.786426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.193 [2024-07-20 15:54:17.786511] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:43.193 [2024-07-20 15:54:17.786523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:43.193 [2024-07-20 15:54:17.786533] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.193 [2024-07-20 15:54:17.786543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.193 [2024-07-20 15:54:17.786565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:43.193 [2024-07-20 15:54:17.786575] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:43.193 [2024-07-20 15:54:17.786584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:43.193 [2024-07-20 15:54:17.786593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:43.193 [2024-07-20 15:54:17.786602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:43.193 [2024-07-20 15:54:17.786612] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.193 [2024-07-20 15:54:17.786621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:43.193 [2024-07-20 15:54:17.786630] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:43.193 [2024-07-20 15:54:17.786641] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.193 [2024-07-20 15:54:17.786650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:43.193 [2024-07-20 15:54:17.786659] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:43.193 [2024-07-20 15:54:17.786669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.193 [2024-07-20 15:54:17.786681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:43.193 [2024-07-20 15:54:17.786690] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:43.193 [2024-07-20 15:54:17.786699] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:18:43.193 [2024-07-20 15:54:17.786708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:43.193 [2024-07-20 15:54:17.786718] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:43.193 [2024-07-20 15:54:17.786727] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.193 [2024-07-20 15:54:17.786736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:43.193 [2024-07-20 15:54:17.786745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:43.194 [2024-07-20 15:54:17.786754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.194 [2024-07-20 15:54:17.786763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:43.194 [2024-07-20 15:54:17.786772] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:43.194 [2024-07-20 15:54:17.786782] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.194 [2024-07-20 15:54:17.786790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:43.194 [2024-07-20 15:54:17.786800] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:43.194 [2024-07-20 15:54:17.786809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.194 [2024-07-20 15:54:17.786817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:43.194 [2024-07-20 15:54:17.786831] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:43.194 [2024-07-20 15:54:17.786840] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.194 [2024-07-20 15:54:17.786849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:43.194 [2024-07-20 15:54:17.786858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:43.194 [2024-07-20 15:54:17.786867] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.194 [2024-07-20 15:54:17.786876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:43.194 [2024-07-20 15:54:17.786884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:43.194 [2024-07-20 15:54:17.786893] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.194 [2024-07-20 15:54:17.786902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:43.194 [2024-07-20 15:54:17.786911] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:43.194 [2024-07-20 15:54:17.786920] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.194 [2024-07-20 15:54:17.786928] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:43.194 [2024-07-20 15:54:17.786939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:43.194 [2024-07-20 15:54:17.786949] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.194 [2024-07-20 15:54:17.786958] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.194 [2024-07-20 15:54:17.786969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:43.194 [2024-07-20 15:54:17.786981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:43.194 [2024-07-20 15:54:17.786990] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:43.194 [2024-07-20 15:54:17.786999] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:43.194 [2024-07-20 15:54:17.787008] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:43.194 [2024-07-20 15:54:17.787018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:43.194 [2024-07-20 15:54:17.787028] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:43.194 [2024-07-20 15:54:17.787039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.194 [2024-07-20 15:54:17.787050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:43.194 [2024-07-20 15:54:17.787061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:43.194 [2024-07-20 15:54:17.787071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:43.194 [2024-07-20 15:54:17.787081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:43.194 [2024-07-20 15:54:17.787091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:43.194 [2024-07-20 15:54:17.787101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:43.194 [2024-07-20 15:54:17.787111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:43.194 [2024-07-20 15:54:17.787121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:43.194 [2024-07-20 15:54:17.787131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:43.194 [2024-07-20 15:54:17.787143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:43.194 [2024-07-20 15:54:17.787153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:43.194 [2024-07-20 15:54:17.787163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:43.194 [2024-07-20 15:54:17.787172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:43.194 [2024-07-20 15:54:17.787182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:43.194 [2024-07-20 15:54:17.787192] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:43.194 [2024-07-20 15:54:17.787203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.194 [2024-07-20 15:54:17.787214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:18:43.194 [2024-07-20 15:54:17.787224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:43.194 [2024-07-20 15:54:17.787242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:43.194 [2024-07-20 15:54:17.787254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:43.194 [2024-07-20 15:54:17.787264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.787275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:43.194 [2024-07-20 15:54:17.787286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:18:43.194 [2024-07-20 15:54:17.787298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.810574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.810670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.194 [2024-07-20 15:54:17.810746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.259 ms 00:18:43.194 [2024-07-20 15:54:17.810804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.811052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.811127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:43.194 [2024-07-20 15:54:17.811162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:18:43.194 [2024-07-20 15:54:17.811204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.828611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.828664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:43.194 [2024-07-20 15:54:17.828688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.235 ms 00:18:43.194 [2024-07-20 15:54:17.828706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.828761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.828797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.194 [2024-07-20 15:54:17.828817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:43.194 [2024-07-20 15:54:17.828841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.829437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.829473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.194 [2024-07-20 15:54:17.829493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:18:43.194 [2024-07-20 15:54:17.829510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.829706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.829743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.194 [2024-07-20 15:54:17.829763] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:18:43.194 [2024-07-20 15:54:17.829781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.837332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.837387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.194 [2024-07-20 15:54:17.837404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.522 ms 00:18:43.194 [2024-07-20 15:54:17.837416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.840253] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:43.194 [2024-07-20 15:54:17.840295] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:43.194 [2024-07-20 15:54:17.840317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.840330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:43.194 [2024-07-20 15:54:17.840344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.795 ms 00:18:43.194 [2024-07-20 15:54:17.840376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.853309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.853345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:43.194 [2024-07-20 15:54:17.853387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.905 ms 00:18:43.194 [2024-07-20 15:54:17.853397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.855012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.855044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:43.194 [2024-07-20 15:54:17.855056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:18:43.194 [2024-07-20 15:54:17.855065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.856497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.856528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:43.194 [2024-07-20 15:54:17.856539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:18:43.194 [2024-07-20 15:54:17.856548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.856815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.856835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:43.194 [2024-07-20 15:54:17.856846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:18:43.194 [2024-07-20 15:54:17.856856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.876546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.876598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:43.194 [2024-07-20 15:54:17.876614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.695 ms 00:18:43.194 
[2024-07-20 15:54:17.876624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.882797] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:43.194 [2024-07-20 15:54:17.885232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.885263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:43.194 [2024-07-20 15:54:17.885276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.577 ms 00:18:43.194 [2024-07-20 15:54:17.885286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.885336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.885348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:43.194 [2024-07-20 15:54:17.885385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:43.194 [2024-07-20 15:54:17.885395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.885476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.885491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:43.194 [2024-07-20 15:54:17.885513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:43.194 [2024-07-20 15:54:17.885522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.885553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.885566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:43.194 [2024-07-20 15:54:17.885583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:43.194 [2024-07-20 15:54:17.885592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.885626] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:43.194 [2024-07-20 15:54:17.885638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.885648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:43.194 [2024-07-20 15:54:17.885671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:43.194 [2024-07-20 15:54:17.885680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.889110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.889146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:43.194 [2024-07-20 15:54:17.889167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.417 ms 00:18:43.194 [2024-07-20 15:54:17.889177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.889238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.194 [2024-07-20 15:54:17.889250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:43.194 [2024-07-20 15:54:17.889260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:43.194 [2024-07-20 15:54:17.889275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.194 [2024-07-20 15:54:17.890485] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.534 ms, result 0 00:19:21.932  Copying: 25/1024 [MB] (25 MBps) Copying: 51/1024 [MB] (25 MBps) Copying: 77/1024 [MB] (25 MBps) Copying: 104/1024 [MB] (27 MBps) Copying: 131/1024 [MB] (26 MBps) Copying: 157/1024 [MB] (25 MBps) Copying: 183/1024 [MB] (26 MBps) Copying: 210/1024 [MB] (26 MBps) Copying: 236/1024 [MB] (26 MBps) Copying: 262/1024 [MB] (26 MBps) Copying: 289/1024 [MB] (26 MBps) Copying: 314/1024 [MB] (25 MBps) Copying: 343/1024 [MB] (28 MBps) Copying: 369/1024 [MB] (26 MBps) Copying: 397/1024 [MB] (27 MBps) Copying: 423/1024 [MB] (26 MBps) Copying: 450/1024 [MB] (26 MBps) Copying: 476/1024 [MB] (26 MBps) Copying: 502/1024 [MB] (26 MBps) Copying: 529/1024 [MB] (26 MBps) Copying: 555/1024 [MB] (26 MBps) Copying: 582/1024 [MB] (26 MBps) Copying: 609/1024 [MB] (26 MBps) Copying: 636/1024 [MB] (26 MBps) Copying: 663/1024 [MB] (27 MBps) Copying: 690/1024 [MB] (27 MBps) Copying: 717/1024 [MB] (26 MBps) Copying: 744/1024 [MB] (26 MBps) Copying: 771/1024 [MB] (27 MBps) Copying: 798/1024 [MB] (26 MBps) Copying: 825/1024 [MB] (26 MBps) Copying: 852/1024 [MB] (27 MBps) Copying: 879/1024 [MB] (27 MBps) Copying: 906/1024 [MB] (26 MBps) Copying: 933/1024 [MB] (27 MBps) Copying: 959/1024 [MB] (26 MBps) Copying: 986/1024 [MB] (26 MBps) Copying: 1011/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-20 15:54:56.707217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.932 [2024-07-20 15:54:56.707307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:21.932 [2024-07-20 15:54:56.707349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:21.932 [2024-07-20 15:54:56.707384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.932 [2024-07-20 15:54:56.707421] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:21.932 [2024-07-20 15:54:56.708274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.932 [2024-07-20 15:54:56.708294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:21.932 [2024-07-20 15:54:56.708313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:19:21.932 [2024-07-20 15:54:56.708330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.932 [2024-07-20 15:54:56.708642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.932 [2024-07-20 15:54:56.708662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:21.932 [2024-07-20 15:54:56.708679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:19:21.932 [2024-07-20 15:54:56.708703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.932 [2024-07-20 15:54:56.713633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.932 [2024-07-20 15:54:56.713674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:21.932 [2024-07-20 15:54:56.713693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.911 ms 00:19:21.932 [2024-07-20 15:54:56.713709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.932 [2024-07-20 15:54:56.721948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.932 [2024-07-20 15:54:56.722006] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:21.932 [2024-07-20 15:54:56.722021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.221 ms 00:19:21.932 [2024-07-20 15:54:56.722037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.932 [2024-07-20 15:54:56.723775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.932 [2024-07-20 15:54:56.723824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:21.932 [2024-07-20 15:54:56.723839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:19:21.932 [2024-07-20 15:54:56.723851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.192 [2024-07-20 15:54:56.728174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.192 [2024-07-20 15:54:56.728220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:22.192 [2024-07-20 15:54:56.728234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.293 ms 00:19:22.192 [2024-07-20 15:54:56.728246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.192 [2024-07-20 15:54:56.728374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.192 [2024-07-20 15:54:56.728389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:22.192 [2024-07-20 15:54:56.728401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:22.192 [2024-07-20 15:54:56.728416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.192 [2024-07-20 15:54:56.730638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.192 [2024-07-20 15:54:56.730680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:22.192 [2024-07-20 15:54:56.730693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.206 ms 00:19:22.192 [2024-07-20 15:54:56.730704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.192 [2024-07-20 15:54:56.732326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.192 [2024-07-20 15:54:56.732379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:22.192 [2024-07-20 15:54:56.732392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.593 ms 00:19:22.192 [2024-07-20 15:54:56.732401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.192 [2024-07-20 15:54:56.733727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.192 [2024-07-20 15:54:56.733766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:22.192 [2024-07-20 15:54:56.733777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:19:22.192 [2024-07-20 15:54:56.733786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.192 [2024-07-20 15:54:56.735316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.192 [2024-07-20 15:54:56.735352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:22.192 [2024-07-20 15:54:56.735376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:19:22.192 [2024-07-20 15:54:56.735386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.192 [2024-07-20 15:54:56.735416] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:19:22.192 [2024-07-20 15:54:56.735432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.735993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:22.192 [2024-07-20 15:54:56.736168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736231] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:22.193 [2024-07-20 15:54:56.736503] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:19:22.193 [2024-07-20 15:54:56.736522] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:22.193 [2024-07-20 15:54:56.736532] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3b248676-7b24-4869-bea0-4b1e4fa616c3
00:19:22.193 [2024-07-20 15:54:56.736551] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:22.193 [2024-07-20 15:54:56.736561] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:22.193 [2024-07-20 15:54:56.736577] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:22.193 [2024-07-20 15:54:56.736594] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:22.193 [2024-07-20 15:54:56.736604] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:22.193 [2024-07-20 15:54:56.736615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:22.193 [2024-07-20 15:54:56.736629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:22.193 [2024-07-20 15:54:56.736639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:22.193 [2024-07-20 15:54:56.736648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
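The statistics block above makes the write-amplification arithmetic explicit: WAF is total media writes divided by user writes, and with total writes: 960 against user writes: 0 (no user I/O has landed on this instance yet) the quotient is reported as inf. Below is a minimal sketch of how that figure can be recomputed from a captured log; this is plain Python offered as a reading aid, keyed to the NOTICE format above, and is not part of SPDK or the test harness.

    import math
    import re

    # Matches stats lines like "... *NOTICE*: [FTL][ftl0] total writes: 960"
    FIELD = re.compile(r"\*NOTICE\*: \[FTL\]\[\w+\] (total writes|user writes): (\d+)")

    def waf_from_log(text: str) -> float:
        """Recompute WAF = total writes / user writes from a dump_stats block."""
        vals = {key: int(num) for key, num in FIELD.findall(text)}
        total, user = vals["total writes"], vals["user writes"]
        # SPDK prints "inf" when there are no user writes yet, as above.
        return math.inf if user == 0 else total / user

    sample = ("*NOTICE*: [FTL][ftl0] total writes: 960 "
              "*NOTICE*: [FTL][ftl0] user writes: 0")
    print(waf_from_log(sample))  # -> inf, matching the "WAF: inf" line above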
00:19:22.193 [2024-07-20 15:54:56.736658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.193 [2024-07-20 15:54:56.736668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:22.193 [2024-07-20 15:54:56.736678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.244 ms
00:19:22.193 [2024-07-20 15:54:56.736688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.738640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.193 [2024-07-20 15:54:56.738671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:22.193 [2024-07-20 15:54:56.738683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.936 ms
00:19:22.193 [2024-07-20 15:54:56.738693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.738814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.193 [2024-07-20 15:54:56.738832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:22.193 [2024-07-20 15:54:56.738843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms
00:19:22.193 [2024-07-20 15:54:56.738852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.746036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.193 [2024-07-20 15:54:56.746066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:22.193 [2024-07-20 15:54:56.746078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.193 [2024-07-20 15:54:56.746113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.746164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.193 [2024-07-20 15:54:56.746174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:22.193 [2024-07-20 15:54:56.746185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.193 [2024-07-20 15:54:56.746195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.746306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.193 [2024-07-20 15:54:56.746321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:22.193 [2024-07-20 15:54:56.746332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.193 [2024-07-20 15:54:56.746343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.746393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.193 [2024-07-20 15:54:56.746404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:22.193 [2024-07-20 15:54:56.746423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.193 [2024-07-20 15:54:56.746432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.758674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.193 [2024-07-20 15:54:56.758716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:22.193 [2024-07-20 15:54:56.758745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.193 [2024-07-20 15:54:56.758755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.767558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.193 [2024-07-20 15:54:56.767592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:22.193 [2024-07-20 15:54:56.767621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.193 [2024-07-20 15:54:56.767632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.767683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.193 [2024-07-20 15:54:56.767695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:22.193 [2024-07-20 15:54:56.767705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.193 [2024-07-20 15:54:56.767715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.767742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.193 [2024-07-20 15:54:56.767759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:22.193 [2024-07-20 15:54:56.767777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.193 [2024-07-20 15:54:56.767787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.767863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.193 [2024-07-20 15:54:56.767875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:22.193 [2024-07-20 15:54:56.767885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.193 [2024-07-20 15:54:56.767894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.767927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.193 [2024-07-20 15:54:56.767938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:22.193 [2024-07-20 15:54:56.767952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.193 [2024-07-20 15:54:56.767977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.768015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.193 [2024-07-20 15:54:56.768027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:22.193 [2024-07-20 15:54:56.768037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.193 [2024-07-20 15:54:56.768046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.768095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.193 [2024-07-20 15:54:56.768110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:22.193 [2024-07-20 15:54:56.768122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.193 [2024-07-20 15:54:56.768131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.193 [2024-07-20 15:54:56.768251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.110 ms, result 0
00:19:22.451
00:19:22.451
00:19:22.451 15:54:57 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:19:24.353 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:19:24.353 15:54:58 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
[2024-07-20 15:54:58.736392] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
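At this point one restore cycle has completed: the device shut down cleanly ('FTL shutdown', 61.110 ms, result 0), md5sum -c confirmed the readback matched, and spdk_dd is relaunched to write the test file into ftl0 at --seek=131072. When skimming a run this long, the finish_msg summary lines are the quickest health check; here is a small grep-style sketch that pulls them out (plain Python keyed to the exact wording in this log; it is not an SPDK utility):

    import re
    import sys

    # Matches e.g.: [FTL][ftl0] Management process finished,
    #               name 'FTL shutdown', duration = 61.110 ms, result 0
    FINISH = re.compile(r"\[FTL\]\[(\w+)\] Management process finished, "
                        r"name '([^']+)', duration = ([\d.]+) ms, result (-?\d+)")

    def summarize(text: str) -> None:
        for dev, name, ms, result in FINISH.findall(text):
            verdict = "ok" if result == "0" else f"FAILED (result {result})"
            print(f"{dev}: {name:<16} {float(ms):>9.3f} ms  {verdict}")

    if __name__ == "__main__":
        summarize(sys.stdin.read())  # e.g. feed the console log through stdin

Fed this section, it would reduce the stream to two lines: the 61.110 ms shutdown above and the 119.207 ms 'FTL startup' that follows below, both with result 0.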
00:19:24.353 [2024-07-20 15:54:58.736546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90464 ] 00:19:24.353 [2024-07-20 15:54:58.886325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.353 [2024-07-20 15:54:58.938919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.353 [2024-07-20 15:54:59.042667] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:24.353 [2024-07-20 15:54:59.042730] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:24.614 [2024-07-20 15:54:59.192980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.614 [2024-07-20 15:54:59.193030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:24.614 [2024-07-20 15:54:59.193045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:24.614 [2024-07-20 15:54:59.193055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.614 [2024-07-20 15:54:59.193120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.614 [2024-07-20 15:54:59.193134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:24.614 [2024-07-20 15:54:59.193145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:24.614 [2024-07-20 15:54:59.193158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.614 [2024-07-20 15:54:59.193179] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:24.614 [2024-07-20 15:54:59.193471] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:24.614 [2024-07-20 15:54:59.193492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.614 [2024-07-20 15:54:59.193505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.614 [2024-07-20 15:54:59.193515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:19:24.614 [2024-07-20 15:54:59.193525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.614 [2024-07-20 15:54:59.194929] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:24.614 [2024-07-20 15:54:59.197415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.614 [2024-07-20 15:54:59.197449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:24.614 [2024-07-20 15:54:59.197483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.492 ms 00:19:24.614 [2024-07-20 15:54:59.197493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.614 [2024-07-20 15:54:59.197549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.614 [2024-07-20 15:54:59.197568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:24.614 [2024-07-20 15:54:59.197579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:24.614 [2024-07-20 15:54:59.197596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.614 [2024-07-20 15:54:59.204359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.614 [2024-07-20 
15:54:59.204431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.614 [2024-07-20 15:54:59.204459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.710 ms 00:19:24.614 [2024-07-20 15:54:59.204475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.614 [2024-07-20 15:54:59.204562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.614 [2024-07-20 15:54:59.204574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.614 [2024-07-20 15:54:59.204585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:24.614 [2024-07-20 15:54:59.204594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.614 [2024-07-20 15:54:59.204648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.614 [2024-07-20 15:54:59.204660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:24.614 [2024-07-20 15:54:59.204677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:24.614 [2024-07-20 15:54:59.204693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.614 [2024-07-20 15:54:59.204720] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:24.614 [2024-07-20 15:54:59.206349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.614 [2024-07-20 15:54:59.206389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.614 [2024-07-20 15:54:59.206400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:19:24.614 [2024-07-20 15:54:59.206410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.614 [2024-07-20 15:54:59.206442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.614 [2024-07-20 15:54:59.206453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:24.614 [2024-07-20 15:54:59.206475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:24.615 [2024-07-20 15:54:59.206485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.615 [2024-07-20 15:54:59.206506] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:24.615 [2024-07-20 15:54:59.206529] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:24.615 [2024-07-20 15:54:59.206570] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:24.615 [2024-07-20 15:54:59.206593] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:24.615 [2024-07-20 15:54:59.206675] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:24.615 [2024-07-20 15:54:59.206692] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:24.615 [2024-07-20 15:54:59.206707] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:24.615 [2024-07-20 15:54:59.206721] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:24.615 [2024-07-20 15:54:59.206732] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:24.615 [2024-07-20 15:54:59.206749] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:24.615 [2024-07-20 15:54:59.206759] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:24.615 [2024-07-20 15:54:59.206769] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:24.615 [2024-07-20 15:54:59.206779] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:24.615 [2024-07-20 15:54:59.206789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.615 [2024-07-20 15:54:59.206799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:24.615 [2024-07-20 15:54:59.206816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:19:24.615 [2024-07-20 15:54:59.206829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.615 [2024-07-20 15:54:59.206902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.615 [2024-07-20 15:54:59.206912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:24.615 [2024-07-20 15:54:59.206922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:24.615 [2024-07-20 15:54:59.206932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.615 [2024-07-20 15:54:59.207012] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:24.615 [2024-07-20 15:54:59.207024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:24.615 [2024-07-20 15:54:59.207035] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.615 [2024-07-20 15:54:59.207045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.615 [2024-07-20 15:54:59.207059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:24.615 [2024-07-20 15:54:59.207068] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:24.615 [2024-07-20 15:54:59.207078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:24.615 [2024-07-20 15:54:59.207087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:24.615 [2024-07-20 15:54:59.207096] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:24.615 [2024-07-20 15:54:59.207105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.615 [2024-07-20 15:54:59.207115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:24.615 [2024-07-20 15:54:59.207124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:24.615 [2024-07-20 15:54:59.207133] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.615 [2024-07-20 15:54:59.207142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:24.615 [2024-07-20 15:54:59.207152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:24.615 [2024-07-20 15:54:59.207161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.615 [2024-07-20 15:54:59.207173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:24.615 [2024-07-20 15:54:59.207183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:24.615 [2024-07-20 15:54:59.207191] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:24.615 [2024-07-20 15:54:59.207200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:24.615 [2024-07-20 15:54:59.207209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:24.615 [2024-07-20 15:54:59.207219] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.615 [2024-07-20 15:54:59.207228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:24.615 [2024-07-20 15:54:59.207237] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:24.615 [2024-07-20 15:54:59.207246] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.615 [2024-07-20 15:54:59.207255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:24.615 [2024-07-20 15:54:59.207264] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:24.615 [2024-07-20 15:54:59.207273] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.615 [2024-07-20 15:54:59.207281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:24.615 [2024-07-20 15:54:59.207290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:24.615 [2024-07-20 15:54:59.207299] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.615 [2024-07-20 15:54:59.207308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:24.615 [2024-07-20 15:54:59.207323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:24.615 [2024-07-20 15:54:59.207332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.615 [2024-07-20 15:54:59.207341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:24.615 [2024-07-20 15:54:59.207350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:24.615 [2024-07-20 15:54:59.207371] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.615 [2024-07-20 15:54:59.207380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:24.615 [2024-07-20 15:54:59.207389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:24.615 [2024-07-20 15:54:59.207398] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.615 [2024-07-20 15:54:59.207408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:24.615 [2024-07-20 15:54:59.207417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:24.615 [2024-07-20 15:54:59.207427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.615 [2024-07-20 15:54:59.207436] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:24.615 [2024-07-20 15:54:59.207445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:24.615 [2024-07-20 15:54:59.207455] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.615 [2024-07-20 15:54:59.207465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.615 [2024-07-20 15:54:59.207474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:24.615 [2024-07-20 15:54:59.207486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:24.615 [2024-07-20 15:54:59.207496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:24.615 [2024-07-20 15:54:59.207505] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:24.615 [2024-07-20 15:54:59.207514] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:24.615 [2024-07-20 15:54:59.207523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:24.615 [2024-07-20 15:54:59.207533] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:24.615 [2024-07-20 15:54:59.207545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.615 [2024-07-20 15:54:59.207556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:24.615 [2024-07-20 15:54:59.207567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:24.615 [2024-07-20 15:54:59.207577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:24.615 [2024-07-20 15:54:59.207587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:24.615 [2024-07-20 15:54:59.207597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:24.615 [2024-07-20 15:54:59.207606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:24.615 [2024-07-20 15:54:59.207616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:24.615 [2024-07-20 15:54:59.207627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:24.615 [2024-07-20 15:54:59.207637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:24.615 [2024-07-20 15:54:59.207649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:24.615 [2024-07-20 15:54:59.207660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:24.615 [2024-07-20 15:54:59.207670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:24.615 [2024-07-20 15:54:59.207680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:24.615 [2024-07-20 15:54:59.207690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:24.615 [2024-07-20 15:54:59.207701] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:24.615 [2024-07-20 15:54:59.207711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.615 [2024-07-20 15:54:59.207722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
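The Region lines above (the base-device list continues just below) give each area as blk_offs/blk_sz in hexadecimal FTL blocks, while the layout dump just before them expressed the same regions in MiB. The two agree if one block is 4 KiB: region type 0x2, the L2P, spans 0x5000 = 20480 blocks, i.e. 80.00 MiB, matching "Region l2p ... blocks: 80.00 MiB"; likewise type 0x9 below spans 0x1900000 blocks = 102400.00 MiB, the data_btm region. A small converter sketch follows (plain Python; the 4 KiB block size is inferred from that correspondence rather than read out of the log):

    import re

    FTL_BLOCK = 4096  # bytes per FTL block; inferred: 0x5000 blocks <-> 80.00 MiB

    REGION = re.compile(r"Region type:(0x[0-9a-fA-F]+) ver:(\d+) "
                        r"blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)")

    def regions_in_mib(text: str):
        """Yield (type, version, offset MiB, size MiB) per superblock region line."""
        for rtype, ver, offs, size in REGION.findall(text):
            yield (rtype, int(ver),
                   int(offs, 16) * FTL_BLOCK / 2**20,
                   int(size, 16) * FTL_BLOCK / 2**20)

    sample = "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000"
    for rtype, ver, offs_mib, size_mib in regions_in_mib(sample):
        print(f"type {rtype} v{ver}: offset {offs_mib:.2f} MiB, size {size_mib:.2f} MiB")
    # -> type 0x2 v0: offset 0.12 MiB, size 80.00 MiB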
00:19:24.615 [2024-07-20 15:54:59.207732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:24.615 [2024-07-20 15:54:59.207751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:24.615 [2024-07-20 15:54:59.207762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:24.615 [2024-07-20 15:54:59.207773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.615 [2024-07-20 15:54:59.207783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:24.615 [2024-07-20 15:54:59.207793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:19:24.615 [2024-07-20 15:54:59.207806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.615 [2024-07-20 15:54:59.232541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.615 [2024-07-20 15:54:59.232639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:24.615 [2024-07-20 15:54:59.232684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.718 ms 00:19:24.615 [2024-07-20 15:54:59.232719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.615 [2024-07-20 15:54:59.232965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.615 [2024-07-20 15:54:59.233002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:24.615 [2024-07-20 15:54:59.233037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:19:24.615 [2024-07-20 15:54:59.233082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.615 [2024-07-20 15:54:59.250610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.615 [2024-07-20 15:54:59.250670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:24.615 [2024-07-20 15:54:59.250714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.384 ms 00:19:24.615 [2024-07-20 15:54:59.250736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.615 [2024-07-20 15:54:59.250797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.615 [2024-07-20 15:54:59.250819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:24.615 [2024-07-20 15:54:59.250840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:24.615 [2024-07-20 15:54:59.250868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.615 [2024-07-20 15:54:59.251514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.615 [2024-07-20 15:54:59.251543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:24.615 [2024-07-20 15:54:59.251565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:19:24.615 [2024-07-20 15:54:59.251586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.615 [2024-07-20 15:54:59.251803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.615 [2024-07-20 15:54:59.251834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:24.615 [2024-07-20 15:54:59.251857] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:19:24.615 [2024-07-20 15:54:59.251877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.615 [2024-07-20 15:54:59.259268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.615 [2024-07-20 15:54:59.259319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:24.615 [2024-07-20 15:54:59.259337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.358 ms 00:19:24.615 [2024-07-20 15:54:59.259351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.615 [2024-07-20 15:54:59.262562] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:24.615 [2024-07-20 15:54:59.262607] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:24.615 [2024-07-20 15:54:59.262633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.615 [2024-07-20 15:54:59.262648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:24.615 [2024-07-20 15:54:59.262663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.159 ms 00:19:24.615 [2024-07-20 15:54:59.262676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.615 [2024-07-20 15:54:59.275256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.615 [2024-07-20 15:54:59.275306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:24.615 [2024-07-20 15:54:59.275335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.547 ms 00:19:24.615 [2024-07-20 15:54:59.275345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.616 [2024-07-20 15:54:59.277023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.616 [2024-07-20 15:54:59.277055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:24.616 [2024-07-20 15:54:59.277067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:19:24.616 [2024-07-20 15:54:59.277077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.616 [2024-07-20 15:54:59.278523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.616 [2024-07-20 15:54:59.278554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:24.616 [2024-07-20 15:54:59.278566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.414 ms 00:19:24.616 [2024-07-20 15:54:59.278575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.616 [2024-07-20 15:54:59.278863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.616 [2024-07-20 15:54:59.278879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:24.616 [2024-07-20 15:54:59.278890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:19:24.616 [2024-07-20 15:54:59.278901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.616 [2024-07-20 15:54:59.298855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.616 [2024-07-20 15:54:59.298931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:24.616 [2024-07-20 15:54:59.298948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.959 ms 00:19:24.616 
[2024-07-20 15:54:59.298958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.616 [2024-07-20 15:54:59.304877] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:24.616 [2024-07-20 15:54:59.307291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.616 [2024-07-20 15:54:59.307320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:24.616 [2024-07-20 15:54:59.307349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.295 ms 00:19:24.616 [2024-07-20 15:54:59.307359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.616 [2024-07-20 15:54:59.307414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.616 [2024-07-20 15:54:59.307427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:24.616 [2024-07-20 15:54:59.307438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:24.616 [2024-07-20 15:54:59.307456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.616 [2024-07-20 15:54:59.307532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.616 [2024-07-20 15:54:59.307547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:24.616 [2024-07-20 15:54:59.307562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:24.616 [2024-07-20 15:54:59.307571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.616 [2024-07-20 15:54:59.307602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.616 [2024-07-20 15:54:59.307612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:24.616 [2024-07-20 15:54:59.307622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:24.616 [2024-07-20 15:54:59.307631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.616 [2024-07-20 15:54:59.307687] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:24.616 [2024-07-20 15:54:59.307699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.616 [2024-07-20 15:54:59.307709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:24.616 [2024-07-20 15:54:59.307731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:24.616 [2024-07-20 15:54:59.307741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.616 [2024-07-20 15:54:59.311191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.616 [2024-07-20 15:54:59.311227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:24.616 [2024-07-20 15:54:59.311240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.436 ms 00:19:24.616 [2024-07-20 15:54:59.311251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.616 [2024-07-20 15:54:59.311314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.616 [2024-07-20 15:54:59.311325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:24.616 [2024-07-20 15:54:59.311346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:24.616 [2024-07-20 15:54:59.311371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.616 [2024-07-20 15:54:59.312430] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 119.207 ms, result 0 00:20:05.482  Copying: 26/1024 [MB] (26 MBps) Copying: 52/1024 [MB] (26 MBps) Copying: 79/1024 [MB] (26 MBps) Copying: 104/1024 [MB] (25 MBps) Copying: 130/1024 [MB] (25 MBps) Copying: 156/1024 [MB] (25 MBps) Copying: 181/1024 [MB] (25 MBps) Copying: 207/1024 [MB] (25 MBps) Copying: 232/1024 [MB] (25 MBps) Copying: 258/1024 [MB] (25 MBps) Copying: 285/1024 [MB] (26 MBps) Copying: 311/1024 [MB] (26 MBps) Copying: 337/1024 [MB] (26 MBps) Copying: 362/1024 [MB] (25 MBps) Copying: 388/1024 [MB] (25 MBps) Copying: 412/1024 [MB] (24 MBps) Copying: 438/1024 [MB] (25 MBps) Copying: 462/1024 [MB] (24 MBps) Copying: 487/1024 [MB] (24 MBps) Copying: 513/1024 [MB] (25 MBps) Copying: 539/1024 [MB] (25 MBps) Copying: 564/1024 [MB] (25 MBps) Copying: 590/1024 [MB] (25 MBps) Copying: 615/1024 [MB] (25 MBps) Copying: 640/1024 [MB] (24 MBps) Copying: 665/1024 [MB] (25 MBps) Copying: 691/1024 [MB] (25 MBps) Copying: 716/1024 [MB] (25 MBps) Copying: 740/1024 [MB] (24 MBps) Copying: 765/1024 [MB] (24 MBps) Copying: 791/1024 [MB] (25 MBps) Copying: 817/1024 [MB] (26 MBps) Copying: 842/1024 [MB] (25 MBps) Copying: 868/1024 [MB] (25 MBps) Copying: 894/1024 [MB] (25 MBps) Copying: 919/1024 [MB] (25 MBps) Copying: 944/1024 [MB] (25 MBps) Copying: 969/1024 [MB] (24 MBps) Copying: 993/1024 [MB] (24 MBps) Copying: 1019/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-20 15:55:40.216799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.482 [2024-07-20 15:55:40.216853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:05.482 [2024-07-20 15:55:40.216870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:05.482 [2024-07-20 15:55:40.216895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.482 [2024-07-20 15:55:40.219380] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:05.482 [2024-07-20 15:55:40.221876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.482 [2024-07-20 15:55:40.221912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:05.482 [2024-07-20 15:55:40.221931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.455 ms 00:20:05.482 [2024-07-20 15:55:40.221942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.482 [2024-07-20 15:55:40.231660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.482 [2024-07-20 15:55:40.231698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:05.482 [2024-07-20 15:55:40.231711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.151 ms 00:20:05.482 [2024-07-20 15:55:40.231720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.482 [2024-07-20 15:55:40.254911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.482 [2024-07-20 15:55:40.254953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:05.482 [2024-07-20 15:55:40.254980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.210 ms 00:20:05.482 [2024-07-20 15:55:40.254996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.482 [2024-07-20 15:55:40.260132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
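The progress run above doubles as a throughput check. spdk_dd reports an average of 25 MBps over the 1024 MB transfer, and the wall clock agrees: startup finished at 15:54:59.312 and the first shutdown step is stamped 15:55:40.216, so 1024 MB moved in roughly 41 s, about 25 MB/s. Below is a sketch that recomputes the figure from the "Copying:" ticks themselves (plain Python, keyed to the progress format above; not part of spdk_dd):

    import re

    TICK = re.compile(r"Copying: (\d+)/(\d+) \[MB\] \((\d+) MBps\)")

    def progress_summary(text: str) -> str:
        """Summarize spdk_dd ticks like 'Copying: 26/1024 [MB] (26 MBps)'."""
        ticks = TICK.findall(text)
        if not ticks:
            return "no progress ticks found"
        done, total, _ = ticks[-1]
        mean = sum(int(rate) for _, _, rate in ticks) / len(ticks)
        return f"{done}/{total} MB copied, mean tick rate {mean:.1f} MBps"

    sample = "Copying: 26/1024 [MB] (26 MBps) Copying: 52/1024 [MB] (26 MBps)"
    print(progress_summary(sample))  # -> 52/1024 MB copied, mean tick rate 26.0 MBps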
00:20:05.482 [2024-07-20 15:55:40.260163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:05.482 [2024-07-20 15:55:40.260184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.092 ms 00:20:05.482 [2024-07-20 15:55:40.260194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.482 [2024-07-20 15:55:40.261738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.482 [2024-07-20 15:55:40.261774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:05.482 [2024-07-20 15:55:40.261786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:20:05.482 [2024-07-20 15:55:40.261795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.482 [2024-07-20 15:55:40.265263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.482 [2024-07-20 15:55:40.265302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:05.482 [2024-07-20 15:55:40.265315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.446 ms 00:20:05.482 [2024-07-20 15:55:40.265331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.743 [2024-07-20 15:55:40.390637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.743 [2024-07-20 15:55:40.390688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:05.743 [2024-07-20 15:55:40.390703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 125.466 ms 00:20:05.743 [2024-07-20 15:55:40.390724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.743 [2024-07-20 15:55:40.392686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.743 [2024-07-20 15:55:40.392717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:05.743 [2024-07-20 15:55:40.392745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.947 ms 00:20:05.743 [2024-07-20 15:55:40.392755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.743 [2024-07-20 15:55:40.394206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.743 [2024-07-20 15:55:40.394249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:05.743 [2024-07-20 15:55:40.394260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:20:05.743 [2024-07-20 15:55:40.394270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.743 [2024-07-20 15:55:40.395480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.743 [2024-07-20 15:55:40.395512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:05.743 [2024-07-20 15:55:40.395523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms 00:20:05.743 [2024-07-20 15:55:40.395533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.743 [2024-07-20 15:55:40.396661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.743 [2024-07-20 15:55:40.396694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:05.743 [2024-07-20 15:55:40.396706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:20:05.743 [2024-07-20 15:55:40.396716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.743 [2024-07-20 
15:55:40.396741] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:05.743 [2024-07-20 15:55:40.396756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115200 / 261120 wr_cnt: 1 state: open 00:20:05.743 [2024-07-20 15:55:40.396779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.396992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.397002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.397012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 
15:55:40.397023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.397033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.397043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.397053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.397064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.397074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.397085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.397228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.397239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.397250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:05.743 [2024-07-20 15:55:40.397262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:20:05.744 [2024-07-20 15:55:40.397431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:05.744 [2024-07-20 15:55:40.397998] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:05.744 [2024-07-20 15:55:40.398008] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3b248676-7b24-4869-bea0-4b1e4fa616c3 00:20:05.744 [2024-07-20 15:55:40.398019] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115200 00:20:05.744 [2024-07-20 15:55:40.398032] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116160 00:20:05.744 [2024-07-20 15:55:40.398042] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115200 00:20:05.744 [2024-07-20 15:55:40.398052] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:20:05.744 [2024-07-20 15:55:40.398062] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:05.744 [2024-07-20 15:55:40.398072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:05.744 [2024-07-20 15:55:40.398083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:05.744 [2024-07-20 15:55:40.398092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:05.744 [2024-07-20 15:55:40.398101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:05.744 [2024-07-20 15:55:40.398111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.744 [2024-07-20 15:55:40.398121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:05.744 [2024-07-20 15:55:40.398131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.373 ms 00:20:05.744 [2024-07-20 15:55:40.398141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.744 [2024-07-20 15:55:40.399816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.744 [2024-07-20 15:55:40.399841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:05.744 [2024-07-20 15:55:40.399852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.661 ms 00:20:05.744 [2024-07-20 15:55:40.399862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.744 [2024-07-20 15:55:40.399963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.744 [2024-07-20 15:55:40.399974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:05.744 [2024-07-20 15:55:40.399985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:05.744 [2024-07-20 15:55:40.400010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.744 [2024-07-20 15:55:40.405920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.744 [2024-07-20 15:55:40.405944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:05.744 [2024-07-20 15:55:40.405955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.744 [2024-07-20 15:55:40.405965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.744 [2024-07-20 15:55:40.406008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.744 [2024-07-20 15:55:40.406019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:05.744 [2024-07-20 15:55:40.406029] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.744 [2024-07-20 15:55:40.406043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.744 [2024-07-20 15:55:40.406100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.744 [2024-07-20 15:55:40.406113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:05.744 [2024-07-20 15:55:40.406123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.744 [2024-07-20 15:55:40.406133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.744 [2024-07-20 15:55:40.406148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.744 [2024-07-20 15:55:40.406158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:05.745 [2024-07-20 15:55:40.406168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.745 [2024-07-20 15:55:40.406177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.745 [2024-07-20 15:55:40.417041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.745 [2024-07-20 15:55:40.417085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:05.745 [2024-07-20 15:55:40.417123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.745 [2024-07-20 15:55:40.417134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.745 [2024-07-20 15:55:40.425188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.745 [2024-07-20 15:55:40.425233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:05.745 [2024-07-20 15:55:40.425245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.745 [2024-07-20 15:55:40.425255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.745 [2024-07-20 15:55:40.425316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.745 [2024-07-20 15:55:40.425327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.745 [2024-07-20 15:55:40.425337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.745 [2024-07-20 15:55:40.425347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.745 [2024-07-20 15:55:40.425386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.745 [2024-07-20 15:55:40.425400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.745 [2024-07-20 15:55:40.425411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.745 [2024-07-20 15:55:40.425427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.745 [2024-07-20 15:55:40.425508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.745 [2024-07-20 15:55:40.425521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.745 [2024-07-20 15:55:40.425531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.745 [2024-07-20 15:55:40.425546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.745 [2024-07-20 15:55:40.425580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.745 [2024-07-20 15:55:40.425592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:20:05.745 [2024-07-20 15:55:40.425601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.745 [2024-07-20 15:55:40.425611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.745 [2024-07-20 15:55:40.425662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.745 [2024-07-20 15:55:40.425677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.745 [2024-07-20 15:55:40.425687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.745 [2024-07-20 15:55:40.425697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.745 [2024-07-20 15:55:40.425738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.745 [2024-07-20 15:55:40.425750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.745 [2024-07-20 15:55:40.425760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.745 [2024-07-20 15:55:40.425770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.745 [2024-07-20 15:55:40.425885] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 211.006 ms, result 0 00:20:06.313 00:20:06.313 00:20:06.313 15:55:41 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:06.572 [2024-07-20 15:55:41.178887] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:06.573 [2024-07-20 15:55:41.179027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90898 ] 00:20:06.573 [2024-07-20 15:55:41.326700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.573 [2024-07-20 15:55:41.367009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.832 [2024-07-20 15:55:41.467078] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.832 [2024-07-20 15:55:41.467174] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.832 [2024-07-20 15:55:41.617865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.832 [2024-07-20 15:55:41.617909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:06.832 [2024-07-20 15:55:41.617924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:06.832 [2024-07-20 15:55:41.617933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.832 [2024-07-20 15:55:41.618006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.832 [2024-07-20 15:55:41.618019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.832 [2024-07-20 15:55:41.618029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:06.832 [2024-07-20 15:55:41.618042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.832 [2024-07-20 15:55:41.618064] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 
00:20:06.832 [2024-07-20 15:55:41.618389] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:06.832 [2024-07-20 15:55:41.618411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.832 [2024-07-20 15:55:41.618424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.832 [2024-07-20 15:55:41.618435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:20:06.832 [2024-07-20 15:55:41.618444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.832 [2024-07-20 15:55:41.619828] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:06.832 [2024-07-20 15:55:41.622368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.832 [2024-07-20 15:55:41.622402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:06.832 [2024-07-20 15:55:41.622427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms 00:20:06.832 [2024-07-20 15:55:41.622440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.832 [2024-07-20 15:55:41.622502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.832 [2024-07-20 15:55:41.622520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:06.832 [2024-07-20 15:55:41.622531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:06.832 [2024-07-20 15:55:41.622541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.093 [2024-07-20 15:55:41.629072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.093 [2024-07-20 15:55:41.629112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:07.093 [2024-07-20 15:55:41.629139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.493 ms 00:20:07.093 [2024-07-20 15:55:41.629149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.093 [2024-07-20 15:55:41.629242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.093 [2024-07-20 15:55:41.629258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:07.093 [2024-07-20 15:55:41.629269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:07.093 [2024-07-20 15:55:41.629278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.093 [2024-07-20 15:55:41.629334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.093 [2024-07-20 15:55:41.629345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:07.093 [2024-07-20 15:55:41.629364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:07.093 [2024-07-20 15:55:41.629390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.093 [2024-07-20 15:55:41.629421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:07.093 [2024-07-20 15:55:41.631006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.093 [2024-07-20 15:55:41.631033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:07.093 [2024-07-20 15:55:41.631044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.592 ms 00:20:07.093 [2024-07-20 15:55:41.631054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:07.093 [2024-07-20 15:55:41.631085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.093 [2024-07-20 15:55:41.631096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:07.093 [2024-07-20 15:55:41.631111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:07.093 [2024-07-20 15:55:41.631121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.093 [2024-07-20 15:55:41.631143] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:07.093 [2024-07-20 15:55:41.631173] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:07.093 [2024-07-20 15:55:41.631213] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:07.093 [2024-07-20 15:55:41.631232] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:07.093 [2024-07-20 15:55:41.631315] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:07.093 [2024-07-20 15:55:41.631331] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:07.093 [2024-07-20 15:55:41.631344] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:07.093 [2024-07-20 15:55:41.631389] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:07.093 [2024-07-20 15:55:41.631402] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:07.093 [2024-07-20 15:55:41.631412] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:07.093 [2024-07-20 15:55:41.631422] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:07.093 [2024-07-20 15:55:41.631439] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:07.093 [2024-07-20 15:55:41.631448] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:07.093 [2024-07-20 15:55:41.631458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.093 [2024-07-20 15:55:41.631481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:07.093 [2024-07-20 15:55:41.631491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:20:07.093 [2024-07-20 15:55:41.631504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.093 [2024-07-20 15:55:41.631571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.093 [2024-07-20 15:55:41.631581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:07.093 [2024-07-20 15:55:41.631598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:07.093 [2024-07-20 15:55:41.631607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.093 [2024-07-20 15:55:41.631697] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:07.093 [2024-07-20 15:55:41.631710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:07.093 [2024-07-20 15:55:41.631720] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:07.093 [2024-07-20 
15:55:41.631737] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.093 [2024-07-20 15:55:41.631749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:07.093 [2024-07-20 15:55:41.631758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:07.093 [2024-07-20 15:55:41.631768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:07.093 [2024-07-20 15:55:41.631777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:07.093 [2024-07-20 15:55:41.631789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:07.093 [2024-07-20 15:55:41.631798] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:07.093 [2024-07-20 15:55:41.631807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:07.093 [2024-07-20 15:55:41.631816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:07.093 [2024-07-20 15:55:41.631827] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:07.093 [2024-07-20 15:55:41.631836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:07.093 [2024-07-20 15:55:41.631846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:07.093 [2024-07-20 15:55:41.631855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.093 [2024-07-20 15:55:41.631864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:07.093 [2024-07-20 15:55:41.631873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:07.093 [2024-07-20 15:55:41.631883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.093 [2024-07-20 15:55:41.631892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:07.093 [2024-07-20 15:55:41.631901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:07.093 [2024-07-20 15:55:41.631910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:07.093 [2024-07-20 15:55:41.631918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:07.093 [2024-07-20 15:55:41.631927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:07.093 [2024-07-20 15:55:41.631939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:07.093 [2024-07-20 15:55:41.631948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:07.093 [2024-07-20 15:55:41.631958] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:07.094 [2024-07-20 15:55:41.631966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:07.094 [2024-07-20 15:55:41.631975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:07.094 [2024-07-20 15:55:41.631984] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:07.094 [2024-07-20 15:55:41.631993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:07.094 [2024-07-20 15:55:41.632001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:07.094 [2024-07-20 15:55:41.632010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:07.094 [2024-07-20 15:55:41.632018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:07.094 [2024-07-20 15:55:41.632027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 
00:20:07.094 [2024-07-20 15:55:41.632036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:07.094 [2024-07-20 15:55:41.632045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:07.094 [2024-07-20 15:55:41.632054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:07.094 [2024-07-20 15:55:41.632062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:07.094 [2024-07-20 15:55:41.632071] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.094 [2024-07-20 15:55:41.632082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:07.094 [2024-07-20 15:55:41.632092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:07.094 [2024-07-20 15:55:41.632102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.094 [2024-07-20 15:55:41.632110] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:07.094 [2024-07-20 15:55:41.632120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:07.094 [2024-07-20 15:55:41.632129] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:07.094 [2024-07-20 15:55:41.632138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:07.094 [2024-07-20 15:55:41.632148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:07.094 [2024-07-20 15:55:41.632157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:07.094 [2024-07-20 15:55:41.632166] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:07.094 [2024-07-20 15:55:41.632175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:07.094 [2024-07-20 15:55:41.632184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:07.094 [2024-07-20 15:55:41.632193] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:07.094 [2024-07-20 15:55:41.632203] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:07.094 [2024-07-20 15:55:41.632221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:07.094 [2024-07-20 15:55:41.632232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:07.094 [2024-07-20 15:55:41.632245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:07.094 [2024-07-20 15:55:41.632255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:07.094 [2024-07-20 15:55:41.632264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:07.094 [2024-07-20 15:55:41.632274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:07.094 [2024-07-20 15:55:41.632284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:07.094 [2024-07-20 15:55:41.632295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 
blk_offs:0x6920 blk_sz:0x800 00:20:07.094 [2024-07-20 15:55:41.632305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:07.094 [2024-07-20 15:55:41.632315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:07.094 [2024-07-20 15:55:41.632325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:07.094 [2024-07-20 15:55:41.632335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:07.094 [2024-07-20 15:55:41.632345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:07.094 [2024-07-20 15:55:41.632370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:07.094 [2024-07-20 15:55:41.632381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:07.094 [2024-07-20 15:55:41.632392] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:07.094 [2024-07-20 15:55:41.632403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:07.094 [2024-07-20 15:55:41.632414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:07.094 [2024-07-20 15:55:41.632427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:07.094 [2024-07-20 15:55:41.632445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:07.094 [2024-07-20 15:55:41.632455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:07.094 [2024-07-20 15:55:41.632466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.094 [2024-07-20 15:55:41.632477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:07.094 [2024-07-20 15:55:41.632487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:20:07.094 [2024-07-20 15:55:41.632500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.094 [2024-07-20 15:55:41.655160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.094 [2024-07-20 15:55:41.655192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:07.094 [2024-07-20 15:55:41.655220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.653 ms 00:20:07.094 [2024-07-20 15:55:41.655230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.094 [2024-07-20 15:55:41.655304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.094 [2024-07-20 15:55:41.655315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:07.094 [2024-07-20 15:55:41.655325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:07.094 
[2024-07-20 15:55:41.655335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.094 [2024-07-20 15:55:41.666210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.094 [2024-07-20 15:55:41.666251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:07.094 [2024-07-20 15:55:41.666279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.818 ms 00:20:07.094 [2024-07-20 15:55:41.666297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.094 [2024-07-20 15:55:41.666329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.094 [2024-07-20 15:55:41.666347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:07.094 [2024-07-20 15:55:41.666360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:07.094 [2024-07-20 15:55:41.666394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.094 [2024-07-20 15:55:41.666858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.094 [2024-07-20 15:55:41.666878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:07.094 [2024-07-20 15:55:41.666889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:20:07.094 [2024-07-20 15:55:41.666906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.094 [2024-07-20 15:55:41.667026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.094 [2024-07-20 15:55:41.667038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:07.094 [2024-07-20 15:55:41.667060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:07.094 [2024-07-20 15:55:41.667069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.094 [2024-07-20 15:55:41.672845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.094 [2024-07-20 15:55:41.672874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:07.094 [2024-07-20 15:55:41.672886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.760 ms 00:20:07.094 [2024-07-20 15:55:41.672895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.094 [2024-07-20 15:55:41.675531] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:07.094 [2024-07-20 15:55:41.675575] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:07.094 [2024-07-20 15:55:41.675593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.094 [2024-07-20 15:55:41.675603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:07.094 [2024-07-20 15:55:41.675613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.603 ms 00:20:07.094 [2024-07-20 15:55:41.675622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.094 [2024-07-20 15:55:41.687468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.094 [2024-07-20 15:55:41.687501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:07.094 [2024-07-20 15:55:41.687515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.811 ms 00:20:07.094 [2024-07-20 15:55:41.687525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:07.094 [2024-07-20 15:55:41.689153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.094 [2024-07-20 15:55:41.689185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:07.095 [2024-07-20 15:55:41.689196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.575 ms 00:20:07.095 [2024-07-20 15:55:41.689205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.095 [2024-07-20 15:55:41.690621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.095 [2024-07-20 15:55:41.690652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:07.095 [2024-07-20 15:55:41.690664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:20:07.095 [2024-07-20 15:55:41.690678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.095 [2024-07-20 15:55:41.690942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.095 [2024-07-20 15:55:41.690964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:07.095 [2024-07-20 15:55:41.690975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:20:07.095 [2024-07-20 15:55:41.690985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.095 [2024-07-20 15:55:41.710396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.095 [2024-07-20 15:55:41.710457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:07.095 [2024-07-20 15:55:41.710472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.410 ms 00:20:07.095 [2024-07-20 15:55:41.710482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.095 [2024-07-20 15:55:41.716378] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:07.095 [2024-07-20 15:55:41.718814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.095 [2024-07-20 15:55:41.718843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:07.095 [2024-07-20 15:55:41.718874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.287 ms 00:20:07.095 [2024-07-20 15:55:41.718884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.095 [2024-07-20 15:55:41.718936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.095 [2024-07-20 15:55:41.718948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:07.095 [2024-07-20 15:55:41.718959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:07.095 [2024-07-20 15:55:41.718969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.095 [2024-07-20 15:55:41.720627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.095 [2024-07-20 15:55:41.720664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:07.095 [2024-07-20 15:55:41.720680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.617 ms 00:20:07.095 [2024-07-20 15:55:41.720691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.095 [2024-07-20 15:55:41.720719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.095 [2024-07-20 15:55:41.720730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:20:07.095 [2024-07-20 15:55:41.720739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:07.095 [2024-07-20 15:55:41.720749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.095 [2024-07-20 15:55:41.720784] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:07.095 [2024-07-20 15:55:41.720796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.095 [2024-07-20 15:55:41.720806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:07.095 [2024-07-20 15:55:41.720829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:07.095 [2024-07-20 15:55:41.720849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.095 [2024-07-20 15:55:41.724357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.095 [2024-07-20 15:55:41.724401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:07.095 [2024-07-20 15:55:41.724434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.480 ms 00:20:07.095 [2024-07-20 15:55:41.724444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.095 [2024-07-20 15:55:41.724506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.095 [2024-07-20 15:55:41.724518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:07.095 [2024-07-20 15:55:41.724529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:07.095 [2024-07-20 15:55:41.724542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.095 [2024-07-20 15:55:41.729396] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 110.699 ms, result 0 00:20:46.252  Copying: 25/1024 [MB] (25 MBps) Copying: 52/1024 [MB] (26 MBps) Copying: 78/1024 [MB] (25 MBps) Copying: 104/1024 [MB] (26 MBps) Copying: 131/1024 [MB] (26 MBps) Copying: 157/1024 [MB] (26 MBps) Copying: 183/1024 [MB] (26 MBps) Copying: 209/1024 [MB] (26 MBps) Copying: 237/1024 [MB] (27 MBps) Copying: 264/1024 [MB] (27 MBps) Copying: 292/1024 [MB] (27 MBps) Copying: 319/1024 [MB] (27 MBps) Copying: 346/1024 [MB] (27 MBps) Copying: 372/1024 [MB] (25 MBps) Copying: 399/1024 [MB] (26 MBps) Copying: 425/1024 [MB] (26 MBps) Copying: 451/1024 [MB] (25 MBps) Copying: 477/1024 [MB] (26 MBps) Copying: 503/1024 [MB] (25 MBps) Copying: 527/1024 [MB] (24 MBps) Copying: 554/1024 [MB] (26 MBps) Copying: 581/1024 [MB] (26 MBps) Copying: 607/1024 [MB] (26 MBps) Copying: 633/1024 [MB] (26 MBps) Copying: 659/1024 [MB] (25 MBps) Copying: 685/1024 [MB] (26 MBps) Copying: 711/1024 [MB] (26 MBps) Copying: 737/1024 [MB] (26 MBps) Copying: 765/1024 [MB] (27 MBps) Copying: 793/1024 [MB] (27 MBps) Copying: 819/1024 [MB] (26 MBps) Copying: 846/1024 [MB] (27 MBps) Copying: 872/1024 [MB] (26 MBps) Copying: 899/1024 [MB] (26 MBps) Copying: 926/1024 [MB] (27 MBps) Copying: 954/1024 [MB] (27 MBps) Copying: 981/1024 [MB] (27 MBps) Copying: 1008/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-20 15:56:20.891142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.252 [2024-07-20 15:56:20.891223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:46.252 [2024-07-20 15:56:20.891240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:46.252 [2024-07-20 
15:56:20.891251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.252 [2024-07-20 15:56:20.891288] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:46.252 [2024-07-20 15:56:20.892229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.252 [2024-07-20 15:56:20.892251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:46.252 [2024-07-20 15:56:20.892269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:20:46.252 [2024-07-20 15:56:20.892279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.252 [2024-07-20 15:56:20.892506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.252 [2024-07-20 15:56:20.892525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:46.252 [2024-07-20 15:56:20.892537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:20:46.252 [2024-07-20 15:56:20.892548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.252 [2024-07-20 15:56:20.896903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.252 [2024-07-20 15:56:20.896947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:46.253 [2024-07-20 15:56:20.896960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.333 ms 00:20:46.253 [2024-07-20 15:56:20.896978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.253 [2024-07-20 15:56:20.903299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.253 [2024-07-20 15:56:20.903338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:46.253 [2024-07-20 15:56:20.903352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.289 ms 00:20:46.253 [2024-07-20 15:56:20.903371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.253 [2024-07-20 15:56:20.904715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.253 [2024-07-20 15:56:20.904754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:46.253 [2024-07-20 15:56:20.904766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:20:46.253 [2024-07-20 15:56:20.904776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.253 [2024-07-20 15:56:20.907871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.253 [2024-07-20 15:56:20.907910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:46.253 [2024-07-20 15:56:20.907922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.065 ms 00:20:46.253 [2024-07-20 15:56:20.907938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.514 [2024-07-20 15:56:21.054944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.514 [2024-07-20 15:56:21.055006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:46.514 [2024-07-20 15:56:21.055035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 147.206 ms 00:20:46.514 [2024-07-20 15:56:21.055045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.514 [2024-07-20 15:56:21.057467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.514 [2024-07-20 15:56:21.057502] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:20:46.514 [2024-07-20 15:56:21.057514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.408 ms
00:20:46.514 [2024-07-20 15:56:21.057524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:46.514 [2024-07-20 15:56:21.059006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:46.514 [2024-07-20 15:56:21.059042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:20:46.514 [2024-07-20 15:56:21.059053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.456 ms
00:20:46.514 [2024-07-20 15:56:21.059062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:46.514 [2024-07-20 15:56:21.060255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:46.514 [2024-07-20 15:56:21.060288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:46.514 [2024-07-20 15:56:21.060299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms
00:20:46.514 [2024-07-20 15:56:21.060308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:46.514 [2024-07-20 15:56:21.061405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:46.514 [2024-07-20 15:56:21.061437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:46.514 [2024-07-20 15:56:21.061448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms
00:20:46.514 [2024-07-20 15:56:21.061457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:46.514 [2024-07-20 15:56:21.061483] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:46.514 [2024-07-20 15:56:21.061499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open
00:20:46.514 [2024-07-20 15:56:21.061513 - 15:56:21.062557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free
00:20:46.515 [2024-07-20 15:56:21.062574] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:46.515 [2024-07-20 15:56:21.062584] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3b248676-7b24-4869-bea0-4b1e4fa616c3
00:20:46.515 [2024-07-20 15:56:21.062594] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888
00:20:46.515 [2024-07-20 15:56:21.062608] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 19648
00:20:46.515 [2024-07-20 15:56:21.062618] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 18688
00:20:46.515 [2024-07-20 15:56:21.062629] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0514
00:20:46.515 [2024-07-20 15:56:21.062646] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:46.515 [2024-07-20 15:56:21.062656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:46.515 [2024-07-20 15:56:21.062665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:46.515 [2024-07-20 15:56:21.062674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:46.515 [2024-07-20 15:56:21.062683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:46.515 [2024-07-20 15:56:21.062693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:46.515 [2024-07-20 15:56:21.062702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:46.515 [2024-07-20 15:56:21.062712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.213 ms
00:20:46.515 [2024-07-20 15:56:21.062722] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.064467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.515 [2024-07-20 15:56:21.064490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:46.515 [2024-07-20 15:56:21.064501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:20:46.515 [2024-07-20 15:56:21.064520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.064625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.515 [2024-07-20 15:56:21.064635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:46.515 [2024-07-20 15:56:21.064648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:46.515 [2024-07-20 15:56:21.064663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.070755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.515 [2024-07-20 15:56:21.070781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:46.515 [2024-07-20 15:56:21.070793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.515 [2024-07-20 15:56:21.070803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.070847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.515 [2024-07-20 15:56:21.070859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:46.515 [2024-07-20 15:56:21.070877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.515 [2024-07-20 15:56:21.070892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.070930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.515 [2024-07-20 15:56:21.070942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:46.515 [2024-07-20 15:56:21.070952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.515 [2024-07-20 15:56:21.070962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.070977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.515 [2024-07-20 15:56:21.070988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:46.515 [2024-07-20 15:56:21.070998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.515 [2024-07-20 15:56:21.071007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.082321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.515 [2024-07-20 15:56:21.082369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:46.515 [2024-07-20 15:56:21.082381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.515 [2024-07-20 15:56:21.082391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.090651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.515 [2024-07-20 15:56:21.090685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:46.515 [2024-07-20 15:56:21.090698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
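
For reference, the WAF figure in the statistics dump above is simply total writes divided by user writes. A quick sanity check of the reported 1.0514, using only the counters from this run (a plain awk one-liner, not part of the test scripts):

  awk 'BEGIN { printf "WAF: %.4f\n", 19648 / 18688 }'   # total writes / user writes -> WAF: 1.0514
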
00:20:46.515 [2024-07-20 15:56:21.090724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.090776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.515 [2024-07-20 15:56:21.090795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.515 [2024-07-20 15:56:21.090806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.515 [2024-07-20 15:56:21.090815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.090841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.515 [2024-07-20 15:56:21.090851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.515 [2024-07-20 15:56:21.090861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.515 [2024-07-20 15:56:21.090870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.090943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.515 [2024-07-20 15:56:21.090965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.515 [2024-07-20 15:56:21.090975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.515 [2024-07-20 15:56:21.090985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.091021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.515 [2024-07-20 15:56:21.091033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:46.515 [2024-07-20 15:56:21.091043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.515 [2024-07-20 15:56:21.091053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.091091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.515 [2024-07-20 15:56:21.091105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.515 [2024-07-20 15:56:21.091115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.515 [2024-07-20 15:56:21.091125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.091169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.515 [2024-07-20 15:56:21.091181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.515 [2024-07-20 15:56:21.091191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.515 [2024-07-20 15:56:21.091201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.515 [2024-07-20 15:56:21.091315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 200.473 ms, result 0 00:20:46.774 00:20:46.774 00:20:46.774 15:56:21 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:48.679 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:48.679 15:56:22 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:20:48.679 15:56:22 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:20:48.679 15:56:22 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:48.679 15:56:23 ftl.ftl_restore -- 
ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:48.679 15:56:23 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:48.679 Process with pid 89414 is not found 00:20:48.679 15:56:23 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 89414 00:20:48.679 15:56:23 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 89414 ']' 00:20:48.679 15:56:23 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 89414 00:20:48.679 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (89414) - No such process 00:20:48.679 15:56:23 ftl.ftl_restore -- common/autotest_common.sh@973 -- # echo 'Process with pid 89414 is not found' 00:20:48.679 15:56:23 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:20:48.679 Remove shared memory files 00:20:48.679 15:56:23 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:48.679 15:56:23 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:20:48.679 15:56:23 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:20:48.679 15:56:23 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:20:48.679 15:56:23 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:48.679 15:56:23 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:20:48.679 00:20:48.679 real 3m2.867s 00:20:48.679 user 2m51.665s 00:20:48.679 sys 0m12.202s 00:20:48.679 15:56:23 ftl.ftl_restore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:48.679 15:56:23 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:48.679 ************************************ 00:20:48.679 END TEST ftl_restore 00:20:48.679 ************************************ 00:20:48.679 15:56:23 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:48.679 15:56:23 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:48.679 15:56:23 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:48.679 15:56:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:48.679 ************************************ 00:20:48.679 START TEST ftl_dirty_shutdown 00:20:48.679 ************************************ 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:48.679 * Looking for test storage... 00:20:48.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
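
The killprocess teardown a few lines back probes liveness with kill -0, which sends no signal and only checks whether the pid still exists; when the probe fails, the helper falls back to the "is not found" message seen in the log. A minimal standalone sketch of the same idiom (the pid value is taken from this run, otherwise illustrative):

  pid=89414
  if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
      kill "$pid"                                  # still running: terminate it
  else
      echo "Process with pid $pid is not found"    # already gone, as in this run
  fi
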
00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:48.679 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # 
device=0000:00:11.0 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=91384 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 91384 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@827 -- # '[' -z 91384 ']' 00:20:48.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:48.680 15:56:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:48.680 [2024-07-20 15:56:23.461394] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:48.680 [2024-07-20 15:56:23.461927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91384 ] 00:20:48.938 [2024-07-20 15:56:23.614100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.938 [2024-07-20 15:56:23.655934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # return 0 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local 
bdev_info 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:49.872 15:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:50.130 15:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:50.130 { 00:20:50.130 "name": "nvme0n1", 00:20:50.130 "aliases": [ 00:20:50.130 "a07a4304-8217-44c1-8876-45e6376bfda7" 00:20:50.130 ], 00:20:50.130 "product_name": "NVMe disk", 00:20:50.130 "block_size": 4096, 00:20:50.130 "num_blocks": 1310720, 00:20:50.130 "uuid": "a07a4304-8217-44c1-8876-45e6376bfda7", 00:20:50.130 "assigned_rate_limits": { 00:20:50.130 "rw_ios_per_sec": 0, 00:20:50.130 "rw_mbytes_per_sec": 0, 00:20:50.130 "r_mbytes_per_sec": 0, 00:20:50.130 "w_mbytes_per_sec": 0 00:20:50.130 }, 00:20:50.130 "claimed": true, 00:20:50.130 "claim_type": "read_many_write_one", 00:20:50.130 "zoned": false, 00:20:50.130 "supported_io_types": { 00:20:50.130 "read": true, 00:20:50.130 "write": true, 00:20:50.130 "unmap": true, 00:20:50.130 "write_zeroes": true, 00:20:50.130 "flush": true, 00:20:50.130 "reset": true, 00:20:50.130 "compare": true, 00:20:50.130 "compare_and_write": false, 00:20:50.130 "abort": true, 00:20:50.130 "nvme_admin": true, 00:20:50.130 "nvme_io": true 00:20:50.130 }, 00:20:50.130 "driver_specific": { 00:20:50.130 "nvme": [ 00:20:50.130 { 00:20:50.130 "pci_address": "0000:00:11.0", 00:20:50.130 "trid": { 00:20:50.130 "trtype": "PCIe", 00:20:50.130 "traddr": "0000:00:11.0" 00:20:50.130 }, 00:20:50.130 "ctrlr_data": { 00:20:50.130 "cntlid": 0, 00:20:50.130 "vendor_id": "0x1b36", 00:20:50.130 "model_number": "QEMU NVMe Ctrl", 00:20:50.130 "serial_number": "12341", 00:20:50.130 "firmware_revision": "8.0.0", 00:20:50.130 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:50.130 "oacs": { 00:20:50.130 "security": 0, 00:20:50.130 "format": 1, 00:20:50.130 "firmware": 0, 00:20:50.130 "ns_manage": 1 00:20:50.130 }, 00:20:50.130 "multi_ctrlr": false, 00:20:50.130 "ana_reporting": false 00:20:50.130 }, 00:20:50.130 "vs": { 00:20:50.130 "nvme_version": "1.4" 00:20:50.130 }, 00:20:50.130 "ns_data": { 00:20:50.130 "id": 1, 00:20:50.130 "can_share": false 00:20:50.130 } 00:20:50.130 } 00:20:50.130 ], 00:20:50.130 "mp_policy": "active_passive" 00:20:50.130 } 00:20:50.130 } 00:20:50.130 ]' 00:20:50.130 15:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:50.130 15:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:50.130 15:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:50.130 15:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:20:50.130 15:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:20:50.130 15:56:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:20:50.130 15:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:20:50.130 15:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:50.130 15:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:20:50.130 15:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:50.130 15:56:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 
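
The get_bdev_size helper traced above pulls block_size and num_blocks out of the bdev_get_bdevs JSON with jq and converts the result to MiB. A minimal sketch of the same calculation for nvme0n1, under the rpc.py path used in this run:

  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1)
  bs=$(jq '.[] .block_size' <<< "$info")    # -> 4096
  nb=$(jq '.[] .num_blocks' <<< "$info")    # -> 1310720
  echo $(( nb * bs / 1024 / 1024 ))         # -> 5120 (MiB)
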
00:20:50.388 15:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=9b54fc42-63e5-4da0-87a7-adbc1eebe43b 00:20:50.388 15:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:20:50.388 15:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b54fc42-63e5-4da0-87a7-adbc1eebe43b 00:20:50.647 15:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:50.647 15:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=4e598205-c787-430f-8b33-bf0c3ba5b669 00:20:50.647 15:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4e598205-c787-430f-8b33-bf0c3ba5b669 00:20:50.905 15:56:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=ccce4217-ba9b-4f69-829b-4481a8395001 00:20:50.905 15:56:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:20:50.905 15:56:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ccce4217-ba9b-4f69-829b-4481a8395001 00:20:50.905 15:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:20:50.905 15:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:50.905 15:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=ccce4217-ba9b-4f69-829b-4481a8395001 00:20:50.905 15:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:20:50.905 15:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size ccce4217-ba9b-4f69-829b-4481a8395001 00:20:50.905 15:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=ccce4217-ba9b-4f69-829b-4481a8395001 00:20:50.905 15:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:50.905 15:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:50.905 15:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:50.905 15:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ccce4217-ba9b-4f69-829b-4481a8395001 00:20:51.164 15:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:51.164 { 00:20:51.164 "name": "ccce4217-ba9b-4f69-829b-4481a8395001", 00:20:51.164 "aliases": [ 00:20:51.164 "lvs/nvme0n1p0" 00:20:51.164 ], 00:20:51.164 "product_name": "Logical Volume", 00:20:51.164 "block_size": 4096, 00:20:51.164 "num_blocks": 26476544, 00:20:51.164 "uuid": "ccce4217-ba9b-4f69-829b-4481a8395001", 00:20:51.164 "assigned_rate_limits": { 00:20:51.164 "rw_ios_per_sec": 0, 00:20:51.164 "rw_mbytes_per_sec": 0, 00:20:51.164 "r_mbytes_per_sec": 0, 00:20:51.164 "w_mbytes_per_sec": 0 00:20:51.164 }, 00:20:51.164 "claimed": false, 00:20:51.164 "zoned": false, 00:20:51.164 "supported_io_types": { 00:20:51.164 "read": true, 00:20:51.164 "write": true, 00:20:51.164 "unmap": true, 00:20:51.164 "write_zeroes": true, 00:20:51.164 "flush": false, 00:20:51.164 "reset": true, 00:20:51.164 "compare": false, 00:20:51.164 "compare_and_write": false, 00:20:51.164 "abort": false, 00:20:51.164 "nvme_admin": false, 00:20:51.164 "nvme_io": false 00:20:51.164 }, 00:20:51.164 "driver_specific": { 00:20:51.164 "lvol": { 00:20:51.164 "lvol_store_uuid": "4e598205-c787-430f-8b33-bf0c3ba5b669", 00:20:51.164 
"base_bdev": "nvme0n1", 00:20:51.164 "thin_provision": true, 00:20:51.164 "num_allocated_clusters": 0, 00:20:51.164 "snapshot": false, 00:20:51.164 "clone": false, 00:20:51.164 "esnap_clone": false 00:20:51.164 } 00:20:51.164 } 00:20:51.164 } 00:20:51.164 ]' 00:20:51.164 15:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:51.164 15:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:51.164 15:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:51.164 15:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:20:51.164 15:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:20:51.164 15:56:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:20:51.164 15:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:20:51.164 15:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:20:51.164 15:56:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:51.422 15:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:51.422 15:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:51.422 15:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size ccce4217-ba9b-4f69-829b-4481a8395001 00:20:51.422 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=ccce4217-ba9b-4f69-829b-4481a8395001 00:20:51.422 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:51.422 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:51.422 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:51.422 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ccce4217-ba9b-4f69-829b-4481a8395001 00:20:51.679 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:51.679 { 00:20:51.679 "name": "ccce4217-ba9b-4f69-829b-4481a8395001", 00:20:51.679 "aliases": [ 00:20:51.679 "lvs/nvme0n1p0" 00:20:51.679 ], 00:20:51.679 "product_name": "Logical Volume", 00:20:51.679 "block_size": 4096, 00:20:51.679 "num_blocks": 26476544, 00:20:51.679 "uuid": "ccce4217-ba9b-4f69-829b-4481a8395001", 00:20:51.679 "assigned_rate_limits": { 00:20:51.679 "rw_ios_per_sec": 0, 00:20:51.679 "rw_mbytes_per_sec": 0, 00:20:51.679 "r_mbytes_per_sec": 0, 00:20:51.679 "w_mbytes_per_sec": 0 00:20:51.679 }, 00:20:51.679 "claimed": false, 00:20:51.679 "zoned": false, 00:20:51.679 "supported_io_types": { 00:20:51.679 "read": true, 00:20:51.679 "write": true, 00:20:51.679 "unmap": true, 00:20:51.679 "write_zeroes": true, 00:20:51.679 "flush": false, 00:20:51.679 "reset": true, 00:20:51.679 "compare": false, 00:20:51.679 "compare_and_write": false, 00:20:51.679 "abort": false, 00:20:51.679 "nvme_admin": false, 00:20:51.679 "nvme_io": false 00:20:51.679 }, 00:20:51.679 "driver_specific": { 00:20:51.679 "lvol": { 00:20:51.679 "lvol_store_uuid": "4e598205-c787-430f-8b33-bf0c3ba5b669", 00:20:51.679 "base_bdev": "nvme0n1", 00:20:51.679 "thin_provision": true, 00:20:51.679 "num_allocated_clusters": 0, 00:20:51.679 "snapshot": false, 00:20:51.679 "clone": false, 00:20:51.679 "esnap_clone": false 00:20:51.679 } 00:20:51.679 } 00:20:51.679 
} 00:20:51.679 ]' 00:20:51.679 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:51.679 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:51.679 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:51.679 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:20:51.679 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:20:51.679 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:20:51.679 15:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:20:51.679 15:56:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:51.937 15:56:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:20:51.937 15:56:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size ccce4217-ba9b-4f69-829b-4481a8395001 00:20:51.937 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=ccce4217-ba9b-4f69-829b-4481a8395001 00:20:51.937 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:51.937 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:51.937 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:51.937 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ccce4217-ba9b-4f69-829b-4481a8395001 00:20:52.195 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:52.195 { 00:20:52.195 "name": "ccce4217-ba9b-4f69-829b-4481a8395001", 00:20:52.195 "aliases": [ 00:20:52.195 "lvs/nvme0n1p0" 00:20:52.195 ], 00:20:52.195 "product_name": "Logical Volume", 00:20:52.195 "block_size": 4096, 00:20:52.195 "num_blocks": 26476544, 00:20:52.195 "uuid": "ccce4217-ba9b-4f69-829b-4481a8395001", 00:20:52.195 "assigned_rate_limits": { 00:20:52.195 "rw_ios_per_sec": 0, 00:20:52.195 "rw_mbytes_per_sec": 0, 00:20:52.195 "r_mbytes_per_sec": 0, 00:20:52.195 "w_mbytes_per_sec": 0 00:20:52.195 }, 00:20:52.195 "claimed": false, 00:20:52.195 "zoned": false, 00:20:52.195 "supported_io_types": { 00:20:52.195 "read": true, 00:20:52.195 "write": true, 00:20:52.195 "unmap": true, 00:20:52.195 "write_zeroes": true, 00:20:52.195 "flush": false, 00:20:52.195 "reset": true, 00:20:52.195 "compare": false, 00:20:52.195 "compare_and_write": false, 00:20:52.195 "abort": false, 00:20:52.195 "nvme_admin": false, 00:20:52.195 "nvme_io": false 00:20:52.195 }, 00:20:52.195 "driver_specific": { 00:20:52.195 "lvol": { 00:20:52.195 "lvol_store_uuid": "4e598205-c787-430f-8b33-bf0c3ba5b669", 00:20:52.195 "base_bdev": "nvme0n1", 00:20:52.195 "thin_provision": true, 00:20:52.195 "num_allocated_clusters": 0, 00:20:52.195 "snapshot": false, 00:20:52.195 "clone": false, 00:20:52.195 "esnap_clone": false 00:20:52.195 } 00:20:52.195 } 00:20:52.195 } 00:20:52.195 ]' 00:20:52.195 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:52.195 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:52.195 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:52.195 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 
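
The num_blocks just reported is where the 103424 MiB bdev_size derived next comes from, and the FTL layout dump further below reports the matching L2P geometry (20971520 entries of 4 bytes, capped at runtime by --l2p_dram_limit 10). Checking both pieces of arithmetic against this run's numbers:

  echo $(( 26476544 * 4096 / 1024 / 1024 ))   # lvol size in MiB -> 103424
  echo $(( 20971520 * 4 / 1024 / 1024 ))      # full L2P table in MiB -> 80
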
00:20:52.195 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:20:52.195 15:56:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:20:52.195 15:56:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:20:52.195 15:56:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ccce4217-ba9b-4f69-829b-4481a8395001 --l2p_dram_limit 10' 00:20:52.195 15:56:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:20:52.195 15:56:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:20:52.195 15:56:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:52.195 15:56:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ccce4217-ba9b-4f69-829b-4481a8395001 --l2p_dram_limit 10 -c nvc0n1p0 00:20:52.195 [2024-07-20 15:56:26.984864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.195 [2024-07-20 15:56:26.984914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:52.195 [2024-07-20 15:56:26.984932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:52.195 [2024-07-20 15:56:26.984960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.195 [2024-07-20 15:56:26.985025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.195 [2024-07-20 15:56:26.985043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:52.195 [2024-07-20 15:56:26.985056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:52.195 [2024-07-20 15:56:26.985068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.195 [2024-07-20 15:56:26.985095] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:52.195 [2024-07-20 15:56:26.985381] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:52.195 [2024-07-20 15:56:26.985423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.195 [2024-07-20 15:56:26.985442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.195 [2024-07-20 15:56:26.985461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:20:52.195 [2024-07-20 15:56:26.985478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.195 [2024-07-20 15:56:26.985565] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b9b94abc-a632-42a4-8e5c-1d0136f370ff 00:20:52.195 [2024-07-20 15:56:26.987079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.195 [2024-07-20 15:56:26.987115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:52.195 [2024-07-20 15:56:26.987128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:52.195 [2024-07-20 15:56:26.987145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.454 [2024-07-20 15:56:26.995314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.454 [2024-07-20 15:56:26.995481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:52.454 [2024-07-20 15:56:26.995561] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.063 ms 00:20:52.454 [2024-07-20 15:56:26.995604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.454 [2024-07-20 15:56:26.995720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.454 [2024-07-20 15:56:26.995766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.454 [2024-07-20 15:56:26.995799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:52.454 [2024-07-20 15:56:26.995832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.454 [2024-07-20 15:56:26.995996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.454 [2024-07-20 15:56:26.996047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:52.454 [2024-07-20 15:56:26.996080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:52.454 [2024-07-20 15:56:26.996168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.454 [2024-07-20 15:56:26.996226] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:52.454 [2024-07-20 15:56:26.998174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.454 [2024-07-20 15:56:26.998312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.454 [2024-07-20 15:56:26.998417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.956 ms 00:20:52.454 [2024-07-20 15:56:26.998456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.454 [2024-07-20 15:56:26.998593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.454 [2024-07-20 15:56:26.998668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:52.454 [2024-07-20 15:56:26.998711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:52.454 [2024-07-20 15:56:26.998742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.454 [2024-07-20 15:56:26.998799] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:52.454 [2024-07-20 15:56:26.999001] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:52.454 [2024-07-20 15:56:26.999064] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:52.454 [2024-07-20 15:56:26.999116] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:52.454 [2024-07-20 15:56:26.999226] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:52.454 [2024-07-20 15:56:26.999282] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:52.454 [2024-07-20 15:56:26.999334] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:52.454 [2024-07-20 15:56:26.999377] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:52.454 [2024-07-20 15:56:26.999462] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:52.454 [2024-07-20 15:56:26.999477] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:52.454 [2024-07-20 15:56:26.999492] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.454 [2024-07-20 15:56:26.999503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:52.454 [2024-07-20 15:56:26.999518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:20:52.454 [2024-07-20 15:56:26.999528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.454 [2024-07-20 15:56:26.999613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.454 [2024-07-20 15:56:26.999632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:52.454 [2024-07-20 15:56:26.999650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:52.454 [2024-07-20 15:56:26.999660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.454 [2024-07-20 15:56:26.999768] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:52.454 [2024-07-20 15:56:26.999782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:52.454 [2024-07-20 15:56:26.999796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:52.454 [2024-07-20 15:56:26.999808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.454 [2024-07-20 15:56:26.999821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:52.454 [2024-07-20 15:56:26.999831] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:52.454 [2024-07-20 15:56:26.999846] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:52.454 [2024-07-20 15:56:26.999856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:52.454 [2024-07-20 15:56:26.999869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:52.454 [2024-07-20 15:56:26.999879] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:52.454 [2024-07-20 15:56:26.999891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:52.454 [2024-07-20 15:56:26.999901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:52.454 [2024-07-20 15:56:26.999913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:52.454 [2024-07-20 15:56:26.999924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:52.454 [2024-07-20 15:56:26.999939] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:52.454 [2024-07-20 15:56:26.999948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.454 [2024-07-20 15:56:26.999960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:52.454 [2024-07-20 15:56:26.999970] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:52.454 [2024-07-20 15:56:27.000001] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.454 [2024-07-20 15:56:27.000012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:52.454 [2024-07-20 15:56:27.000024] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:52.454 [2024-07-20 15:56:27.000035] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.454 [2024-07-20 15:56:27.000048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:52.454 [2024-07-20 15:56:27.000058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:52.454 [2024-07-20 
15:56:27.000070] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.454 [2024-07-20 15:56:27.000081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:52.454 [2024-07-20 15:56:27.000093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:52.454 [2024-07-20 15:56:27.000104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.454 [2024-07-20 15:56:27.000116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:52.454 [2024-07-20 15:56:27.000127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:52.454 [2024-07-20 15:56:27.000141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.454 [2024-07-20 15:56:27.000151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:52.454 [2024-07-20 15:56:27.000165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:52.454 [2024-07-20 15:56:27.000174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:52.454 [2024-07-20 15:56:27.000186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:52.454 [2024-07-20 15:56:27.000196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:52.454 [2024-07-20 15:56:27.000209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:52.454 [2024-07-20 15:56:27.000218] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:52.454 [2024-07-20 15:56:27.000230] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:52.454 [2024-07-20 15:56:27.000240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.454 [2024-07-20 15:56:27.000252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:52.454 [2024-07-20 15:56:27.000262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:52.454 [2024-07-20 15:56:27.000275] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.454 [2024-07-20 15:56:27.000284] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:52.454 [2024-07-20 15:56:27.000297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:52.454 [2024-07-20 15:56:27.000308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:52.454 [2024-07-20 15:56:27.000331] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.454 [2024-07-20 15:56:27.000345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:52.454 [2024-07-20 15:56:27.000371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:52.454 [2024-07-20 15:56:27.000382] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:52.454 [2024-07-20 15:56:27.000394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:52.454 [2024-07-20 15:56:27.000404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:52.454 [2024-07-20 15:56:27.000416] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:52.454 [2024-07-20 15:56:27.000432] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:52.454 [2024-07-20 15:56:27.000451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 
blk_sz:0x20 00:20:52.454 [2024-07-20 15:56:27.000474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:52.454 [2024-07-20 15:56:27.000489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:52.454 [2024-07-20 15:56:27.000500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:52.454 [2024-07-20 15:56:27.000514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:52.454 [2024-07-20 15:56:27.000525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:52.454 [2024-07-20 15:56:27.000538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:52.454 [2024-07-20 15:56:27.000549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:52.455 [2024-07-20 15:56:27.000565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:52.455 [2024-07-20 15:56:27.000576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:52.455 [2024-07-20 15:56:27.000589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:52.455 [2024-07-20 15:56:27.000600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:52.455 [2024-07-20 15:56:27.000613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:52.455 [2024-07-20 15:56:27.000624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:52.455 [2024-07-20 15:56:27.000637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:52.455 [2024-07-20 15:56:27.000648] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:52.455 [2024-07-20 15:56:27.000662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:52.455 [2024-07-20 15:56:27.000674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:52.455 [2024-07-20 15:56:27.000687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:52.455 [2024-07-20 15:56:27.000698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:52.455 [2024-07-20 15:56:27.000712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:52.455 [2024-07-20 15:56:27.000724] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.455 [2024-07-20 15:56:27.000737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:52.455 [2024-07-20 15:56:27.000748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:20:52.455 [2024-07-20 15:56:27.000765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.455 [2024-07-20 15:56:27.000808] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:52.455 [2024-07-20 15:56:27.000824] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:55.754 [2024-07-20 15:56:30.348536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.348609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:55.754 [2024-07-20 15:56:30.348634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3353.162 ms 00:20:55.754 [2024-07-20 15:56:30.348654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.359847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.359902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.754 [2024-07-20 15:56:30.359927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.093 ms 00:20:55.754 [2024-07-20 15:56:30.359948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.360072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.360103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:55.754 [2024-07-20 15:56:30.360120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:55.754 [2024-07-20 15:56:30.360149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.370445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.370493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.754 [2024-07-20 15:56:30.370514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.249 ms 00:20:55.754 [2024-07-20 15:56:30.370533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.370595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.370616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:55.754 [2024-07-20 15:56:30.370632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:55.754 [2024-07-20 15:56:30.370649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.371232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.371280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:55.754 [2024-07-20 15:56:30.371299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:20:55.754 [2024-07-20 15:56:30.371353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.371548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.371590] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:55.754 [2024-07-20 15:56:30.371618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:20:55.754 [2024-07-20 15:56:30.371638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.378801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.378843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:55.754 [2024-07-20 15:56:30.378864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.140 ms 00:20:55.754 [2024-07-20 15:56:30.378883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.386598] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:55.754 [2024-07-20 15:56:30.389852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.389885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:55.754 [2024-07-20 15:56:30.389910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.885 ms 00:20:55.754 [2024-07-20 15:56:30.389925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.470978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.471035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:55.754 [2024-07-20 15:56:30.471065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.138 ms 00:20:55.754 [2024-07-20 15:56:30.471084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.471320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.471342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:55.754 [2024-07-20 15:56:30.471382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:20:55.754 [2024-07-20 15:56:30.471399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.475188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.475230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:55.754 [2024-07-20 15:56:30.475255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.754 ms 00:20:55.754 [2024-07-20 15:56:30.475276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.478165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.478202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:55.754 [2024-07-20 15:56:30.478237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.840 ms 00:20:55.754 [2024-07-20 15:56:30.478251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.478589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.478618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:55.754 [2024-07-20 15:56:30.478641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:20:55.754 [2024-07-20 
15:56:30.478658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.516667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.516712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:55.754 [2024-07-20 15:56:30.516740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.009 ms 00:20:55.754 [2024-07-20 15:56:30.516769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.521055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.521093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:55.754 [2024-07-20 15:56:30.521118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.237 ms 00:20:55.754 [2024-07-20 15:56:30.521133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.524248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.524285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:55.754 [2024-07-20 15:56:30.524308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms 00:20:55.754 [2024-07-20 15:56:30.524323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.527899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.527936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:55.754 [2024-07-20 15:56:30.527961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.503 ms 00:20:55.754 [2024-07-20 15:56:30.527976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.528048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.528075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:55.754 [2024-07-20 15:56:30.528098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:55.754 [2024-07-20 15:56:30.528115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.528200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.754 [2024-07-20 15:56:30.528218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:55.754 [2024-07-20 15:56:30.528247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:55.754 [2024-07-20 15:56:30.528263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.754 [2024-07-20 15:56:30.529407] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3549.900 ms, result 0 00:20:55.754 { 00:20:55.754 "name": "ftl0", 00:20:55.754 "uuid": "b9b94abc-a632-42a4-8e5c-1d0136f370ff" 00:20:55.754 } 00:20:56.013 15:56:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:20:56.013 15:56:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:56.013 15:56:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:20:56.013 15:56:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:20:56.013 15:56:30 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:20:56.273 /dev/nbd0 00:20:56.273 15:56:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:20:56.273 15:56:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:20:56.273 15:56:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@865 -- # local i 00:20:56.273 15:56:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:20:56.273 15:56:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:20:56.273 15:56:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:20:56.273 15:56:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # break 00:20:56.273 15:56:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:20:56.273 15:56:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:20:56.273 15:56:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:20:56.273 1+0 records in 00:20:56.273 1+0 records out 00:20:56.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597993 s, 6.8 MB/s 00:20:56.273 15:56:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:56.273 15:56:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # size=4096 00:20:56.273 15:56:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:56.273 15:56:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:20:56.273 15:56:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # return 0 00:20:56.273 15:56:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:20:56.538 [2024-07-20 15:56:31.114919] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:56.538 [2024-07-20 15:56:31.115081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91525 ] 00:20:56.538 [2024-07-20 15:56:31.265700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.538 [2024-07-20 15:56:31.306406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.546  Copying: 217/1024 [MB] (217 MBps) Copying: 437/1024 [MB] (220 MBps) Copying: 659/1024 [MB] (221 MBps) Copying: 871/1024 [MB] (211 MBps) Copying: 1024/1024 [MB] (average 217 MBps) 00:21:01.546 00:21:01.546 15:56:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:03.449 15:56:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:21:03.449 [2024-07-20 15:56:38.073674] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
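
The xtrace above is the waitfornbd helper from autotest_common.sh (lines 864-885 in the trace) verifying that /dev/nbd0 came up. Reconstructed from that trace, the helper's logic is roughly the sketch below; the back-off between retries is an assumption (the trace only records iterations that actually ran, and both checks passed on the first attempt here), and the scratch-file path is shortened:

    waitfornbd() {
        local nbd_name=$1 i
        # First wait (up to 20 attempts) for the kernel to publish the
        # device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed delay between attempts
        done
        # Then require a direct-I/O read of one 4 KiB block to succeed and
        # produce a non-empty file before declaring the device ready.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                local size
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1    # assumed
        done
        return 1
    }
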
00:21:03.449 [2024-07-20 15:56:38.073844] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91601 ]
00:21:03.449 [2024-07-20 15:56:38.226824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:03.707 [2024-07-20 15:56:38.268067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:59.795  Copying: 17/1024 [MB] (17 MBps) [54 intermediate progress-meter updates elided; per-interval throughput 15-19 MBps] Copying: 1024/1024 [MB] (average 18 MBps)
00:21:59.795
00:21:59.795 15:57:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0
00:21:59.795 15:57:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
00:22:00.054 15:57:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:22:00.054 [2024-07-20 15:57:34.777782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:00.054 [2024-07-20 15:57:34.777855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:22:00.054 [2024-07-20 15:57:34.777879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:22:00.054 [2024-07-20 15:57:34.777893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:00.054 [2024-07-20 15:57:34.777925] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:00.054 [2024-07-20 15:57:34.778615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:00.054 [2024-07-20 15:57:34.778636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*:
[FTL][ftl0] name: Unregister IO device 00:22:00.054 [2024-07-20 15:57:34.778663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:22:00.054 [2024-07-20 15:57:34.778673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.054 [2024-07-20 15:57:34.780548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.054 [2024-07-20 15:57:34.780589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:00.054 [2024-07-20 15:57:34.780604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.848 ms 00:22:00.054 [2024-07-20 15:57:34.780615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.055 [2024-07-20 15:57:34.797640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.055 [2024-07-20 15:57:34.797676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:00.055 [2024-07-20 15:57:34.797710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.028 ms 00:22:00.055 [2024-07-20 15:57:34.797720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.055 [2024-07-20 15:57:34.802620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.055 [2024-07-20 15:57:34.802664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:00.055 [2024-07-20 15:57:34.802680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.867 ms 00:22:00.055 [2024-07-20 15:57:34.802689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.055 [2024-07-20 15:57:34.804439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.055 [2024-07-20 15:57:34.804472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:00.055 [2024-07-20 15:57:34.804490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.676 ms 00:22:00.055 [2024-07-20 15:57:34.804500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.055 [2024-07-20 15:57:34.809265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.055 [2024-07-20 15:57:34.809307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:00.055 [2024-07-20 15:57:34.809322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.733 ms 00:22:00.055 [2024-07-20 15:57:34.809332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.055 [2024-07-20 15:57:34.809454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.055 [2024-07-20 15:57:34.809475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:00.055 [2024-07-20 15:57:34.809490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:00.055 [2024-07-20 15:57:34.809499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.055 [2024-07-20 15:57:34.811585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.055 [2024-07-20 15:57:34.811619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:00.055 [2024-07-20 15:57:34.811633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.065 ms 00:22:00.055 [2024-07-20 15:57:34.811642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.055 [2024-07-20 15:57:34.813098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.055 [2024-07-20 
15:57:34.813131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:22:00.055 [2024-07-20 15:57:34.813148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms
00:22:00.055 [2024-07-20 15:57:34.813158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:00.055 [2024-07-20 15:57:34.814395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:00.055 [2024-07-20 15:57:34.814427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:22:00.055 [2024-07-20 15:57:34.814441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.203 ms
00:22:00.055 [2024-07-20 15:57:34.814460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:00.055 [2024-07-20 15:57:34.815600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:00.055 [2024-07-20 15:57:34.815635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:22:00.055 [2024-07-20 15:57:34.815649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms
00:22:00.055 [2024-07-20 15:57:34.815658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:00.055 [2024-07-20 15:57:34.815689] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:00.055 [2024-07-20 15:57:34.815705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2-99 elided: every band reports the identical 0 / 261120 wr_cnt: 0 state: free]
00:22:00.056 [2024-07-20 15:57:34.816920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:22:00.056 [2024-07-20 15:57:34.816938] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:00.056 [2024-07-20 15:57:34.816952] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b9b94abc-a632-42a4-8e5c-1d0136f370ff
00:22:00.056 [2024-07-20 15:57:34.816963] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:00.056 [2024-07-20 15:57:34.816977] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:22:00.056 [2024-07-20 15:57:34.816986] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:22:00.056 [2024-07-20 15:57:34.816999] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:22:00.056 [2024-07-20 15:57:34.817008] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:00.056 [2024-07-20 15:57:34.817021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:00.056 [2024-07-20 15:57:34.817048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:00.056 [2024-07-20 15:57:34.817060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:00.056 [2024-07-20 15:57:34.817069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:00.056 [2024-07-20 15:57:34.817081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:00.056 [2024-07-20 15:57:34.817097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:00.056 [2024-07-20 15:57:34.817117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms
00:22:00.056 [2024-07-20 15:57:34.817130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:00.056 [2024-07-20 15:57:34.818887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:00.056 [2024-07-20 15:57:34.818910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:22:00.056 [2024-07-20 15:57:34.818928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.737 ms
00:22:00.056 [2024-07-20 15:57:34.818938] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:22:00.056 [2024-07-20 15:57:34.819042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.056 [2024-07-20 15:57:34.819053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:00.056 [2024-07-20 15:57:34.819069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:22:00.056 [2024-07-20 15:57:34.819087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.056 [2024-07-20 15:57:34.825988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.056 [2024-07-20 15:57:34.826013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.056 [2024-07-20 15:57:34.826028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.056 [2024-07-20 15:57:34.826038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.056 [2024-07-20 15:57:34.826099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.056 [2024-07-20 15:57:34.826112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.056 [2024-07-20 15:57:34.826128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.056 [2024-07-20 15:57:34.826145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.056 [2024-07-20 15:57:34.826233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.056 [2024-07-20 15:57:34.826246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.056 [2024-07-20 15:57:34.826262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.056 [2024-07-20 15:57:34.826272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.056 [2024-07-20 15:57:34.826294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.056 [2024-07-20 15:57:34.826304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.056 [2024-07-20 15:57:34.826316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.056 [2024-07-20 15:57:34.826328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.056 [2024-07-20 15:57:34.837587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.056 [2024-07-20 15:57:34.837631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.056 [2024-07-20 15:57:34.837662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.056 [2024-07-20 15:57:34.837673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.056 [2024-07-20 15:57:34.845885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.056 [2024-07-20 15:57:34.845922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.056 [2024-07-20 15:57:34.845941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.056 [2024-07-20 15:57:34.845951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.056 [2024-07-20 15:57:34.846026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.056 [2024-07-20 15:57:34.846038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:00.056 [2024-07-20 15:57:34.846054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:22:00.056 [2024-07-20 15:57:34.846064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.056 [2024-07-20 15:57:34.846105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.056 [2024-07-20 15:57:34.846116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:00.056 [2024-07-20 15:57:34.846129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.056 [2024-07-20 15:57:34.846138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.056 [2024-07-20 15:57:34.846236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.056 [2024-07-20 15:57:34.846249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:00.056 [2024-07-20 15:57:34.846262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.056 [2024-07-20 15:57:34.846272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.056 [2024-07-20 15:57:34.846316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.056 [2024-07-20 15:57:34.846327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:00.056 [2024-07-20 15:57:34.846340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.056 [2024-07-20 15:57:34.846350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.056 [2024-07-20 15:57:34.846470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.056 [2024-07-20 15:57:34.846482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:00.056 [2024-07-20 15:57:34.846498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.056 [2024-07-20 15:57:34.846514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.056 [2024-07-20 15:57:34.846567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.056 [2024-07-20 15:57:34.846580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:00.056 [2024-07-20 15:57:34.846592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.056 [2024-07-20 15:57:34.846601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.056 [2024-07-20 15:57:34.846775] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.050 ms, result 0 00:22:00.314 true 00:22:00.314 15:57:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 91384 00:22:00.314 15:57:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid91384 00:22:00.314 15:57:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:00.314 [2024-07-20 15:57:34.947169] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
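
Stripped of the xtrace noise, the sequence dirty_shutdown.sh has run between bringing up ftl0 and this point is roughly the following (paths shortened; "$rpc" stands for scripts/rpc.py and "$svcpid" for the spdk_tgt PID, 91384 in this run):

    spdk_dd --if=/dev/urandom --of=testfile --bs=4096 --count=262144    # 1 GiB of random data
    md5sum testfile                                                     # reference checksum
    spdk_dd --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
    sync /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd0
    $rpc bdev_ftl_unload -b ftl0        # clean 'FTL shutdown', result 0
    kill -9 $svcpid                     # hard-kill the target; nothing further is persisted
    rm -f /dev/shm/spdk_tgt_trace.pid$svcpid
    spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
    spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json
                                        # reattach ftl0 from the saved config and write the second 1 GiB

Because the target was killed rather than shut down, the next load has to recover state from disk, which appears to be what the bs_recover notices just below report.
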
00:22:00.314 [2024-07-20 15:57:34.947297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92185 ] 00:22:00.314 [2024-07-20 15:57:35.098618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.572 [2024-07-20 15:57:35.140171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.318  Copying: 223/1024 [MB] (223 MBps) Copying: 447/1024 [MB] (224 MBps) Copying: 671/1024 [MB] (223 MBps) Copying: 886/1024 [MB] (214 MBps) Copying: 1024/1024 [MB] (average 220 MBps) 00:22:05.319 00:22:05.319 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 91384 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:05.319 15:57:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:05.578 [2024-07-20 15:57:40.166247] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:05.578 [2024-07-20 15:57:40.166381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92239 ] 00:22:05.578 [2024-07-20 15:57:40.318124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.578 [2024-07-20 15:57:40.358675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.838 [2024-07-20 15:57:40.458345] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:05.838 [2024-07-20 15:57:40.458443] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:05.838 [2024-07-20 15:57:40.519520] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:05.838 [2024-07-20 15:57:40.519835] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:05.838 [2024-07-20 15:57:40.520138] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:06.099 [2024-07-20 15:57:40.826067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.099 [2024-07-20 15:57:40.826114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:06.099 [2024-07-20 15:57:40.826129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:06.099 [2024-07-20 15:57:40.826139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.099 [2024-07-20 15:57:40.826205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.099 [2024-07-20 15:57:40.826217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:06.099 [2024-07-20 15:57:40.826228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:06.099 [2024-07-20 15:57:40.826237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.099 [2024-07-20 15:57:40.826267] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:06.099 [2024-07-20 15:57:40.826595] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:06.099 [2024-07-20 15:57:40.826620] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.099 [2024-07-20 15:57:40.826630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:06.099 [2024-07-20 15:57:40.826648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:22:06.099 [2024-07-20 15:57:40.826665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.099 [2024-07-20 15:57:40.828066] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:06.099 [2024-07-20 15:57:40.830575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.099 [2024-07-20 15:57:40.830608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:06.099 [2024-07-20 15:57:40.830621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.514 ms 00:22:06.099 [2024-07-20 15:57:40.830631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.099 [2024-07-20 15:57:40.830694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.099 [2024-07-20 15:57:40.830709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:06.099 [2024-07-20 15:57:40.830727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:06.099 [2024-07-20 15:57:40.830737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.099 [2024-07-20 15:57:40.837444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.099 [2024-07-20 15:57:40.837468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:06.099 [2024-07-20 15:57:40.837504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.664 ms 00:22:06.099 [2024-07-20 15:57:40.837514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.099 [2024-07-20 15:57:40.837602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.099 [2024-07-20 15:57:40.837620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:06.099 [2024-07-20 15:57:40.837630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:06.099 [2024-07-20 15:57:40.837648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.099 [2024-07-20 15:57:40.837699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.099 [2024-07-20 15:57:40.837711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:06.099 [2024-07-20 15:57:40.837727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:06.099 [2024-07-20 15:57:40.837744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.099 [2024-07-20 15:57:40.837773] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:06.099 [2024-07-20 15:57:40.839420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.099 [2024-07-20 15:57:40.839446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:06.099 [2024-07-20 15:57:40.839458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.660 ms 00:22:06.099 [2024-07-20 15:57:40.839472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.099 [2024-07-20 15:57:40.839503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.099 [2024-07-20 15:57:40.839513] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:06.099 [2024-07-20 15:57:40.839524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:06.099 [2024-07-20 15:57:40.839533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.099 [2024-07-20 15:57:40.839566] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:06.099 [2024-07-20 15:57:40.839590] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:06.099 [2024-07-20 15:57:40.839639] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:06.099 [2024-07-20 15:57:40.839666] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:06.099 [2024-07-20 15:57:40.839746] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:06.099 [2024-07-20 15:57:40.839759] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:06.099 [2024-07-20 15:57:40.839793] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:06.099 [2024-07-20 15:57:40.839806] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:06.099 [2024-07-20 15:57:40.839817] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:06.099 [2024-07-20 15:57:40.839828] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:06.099 [2024-07-20 15:57:40.839837] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:06.099 [2024-07-20 15:57:40.839847] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:06.099 [2024-07-20 15:57:40.839860] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:06.099 [2024-07-20 15:57:40.839870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.099 [2024-07-20 15:57:40.839879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:06.099 [2024-07-20 15:57:40.839890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:22:06.099 [2024-07-20 15:57:40.839909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.099 [2024-07-20 15:57:40.839976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.099 [2024-07-20 15:57:40.839986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:06.099 [2024-07-20 15:57:40.839996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:06.099 [2024-07-20 15:57:40.840005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.099 [2024-07-20 15:57:40.840086] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:06.099 [2024-07-20 15:57:40.840100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:06.099 [2024-07-20 15:57:40.840110] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.099 [2024-07-20 15:57:40.840120] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.099 [2024-07-20 15:57:40.840130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 
00:22:06.099 [2024-07-20 15:57:40.840139] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:06.099 [2024-07-20 15:57:40.840148] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:06.099 [2024-07-20 15:57:40.840157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:06.099 [2024-07-20 15:57:40.840166] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:06.099 [2024-07-20 15:57:40.840175] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.099 [2024-07-20 15:57:40.840184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:06.100 [2024-07-20 15:57:40.840201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:06.100 [2024-07-20 15:57:40.840213] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.100 [2024-07-20 15:57:40.840222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:06.100 [2024-07-20 15:57:40.840231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:06.100 [2024-07-20 15:57:40.840240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.100 [2024-07-20 15:57:40.840250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:06.100 [2024-07-20 15:57:40.840259] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:06.100 [2024-07-20 15:57:40.840268] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.100 [2024-07-20 15:57:40.840277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:06.100 [2024-07-20 15:57:40.840286] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:06.100 [2024-07-20 15:57:40.840295] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.100 [2024-07-20 15:57:40.840303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:06.100 [2024-07-20 15:57:40.840313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:06.100 [2024-07-20 15:57:40.840322] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.100 [2024-07-20 15:57:40.840330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:06.100 [2024-07-20 15:57:40.840339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:06.100 [2024-07-20 15:57:40.840356] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.100 [2024-07-20 15:57:40.840367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:06.100 [2024-07-20 15:57:40.840376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:06.100 [2024-07-20 15:57:40.840397] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.100 [2024-07-20 15:57:40.840406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:06.100 [2024-07-20 15:57:40.840415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:06.100 [2024-07-20 15:57:40.840424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.100 [2024-07-20 15:57:40.840433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:06.100 [2024-07-20 15:57:40.840442] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:06.100 [2024-07-20 15:57:40.840451] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.100 [2024-07-20 15:57:40.840460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:06.100 [2024-07-20 15:57:40.840469] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:06.100 [2024-07-20 15:57:40.840478] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.100 [2024-07-20 15:57:40.840487] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:06.100 [2024-07-20 15:57:40.840496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:06.100 [2024-07-20 15:57:40.840505] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.100 [2024-07-20 15:57:40.840516] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:06.100 [2024-07-20 15:57:40.840527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:06.100 [2024-07-20 15:57:40.840537] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.100 [2024-07-20 15:57:40.840546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.100 [2024-07-20 15:57:40.840556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:06.100 [2024-07-20 15:57:40.840565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:06.100 [2024-07-20 15:57:40.840575] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:06.100 [2024-07-20 15:57:40.840584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:06.100 [2024-07-20 15:57:40.840593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:06.100 [2024-07-20 15:57:40.840602] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:06.100 [2024-07-20 15:57:40.840612] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:06.100 [2024-07-20 15:57:40.840624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.100 [2024-07-20 15:57:40.840635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:06.100 [2024-07-20 15:57:40.840645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:06.100 [2024-07-20 15:57:40.840655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:06.100 [2024-07-20 15:57:40.840665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:06.100 [2024-07-20 15:57:40.840678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:06.100 [2024-07-20 15:57:40.840688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:06.100 [2024-07-20 15:57:40.840698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:06.100 [2024-07-20 15:57:40.840708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 
blk_sz:0x40 00:22:06.100 [2024-07-20 15:57:40.840719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:06.100 [2024-07-20 15:57:40.840729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:06.100 [2024-07-20 15:57:40.840739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:06.100 [2024-07-20 15:57:40.840749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:06.100 [2024-07-20 15:57:40.840758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:06.100 [2024-07-20 15:57:40.840769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:06.100 [2024-07-20 15:57:40.840787] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:06.100 [2024-07-20 15:57:40.840801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.100 [2024-07-20 15:57:40.840812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:06.100 [2024-07-20 15:57:40.840822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:06.100 [2024-07-20 15:57:40.840832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:06.100 [2024-07-20 15:57:40.840842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:06.100 [2024-07-20 15:57:40.840856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.100 [2024-07-20 15:57:40.840866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:06.100 [2024-07-20 15:57:40.840877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:22:06.100 [2024-07-20 15:57:40.840887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.100 [2024-07-20 15:57:40.863933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.100 [2024-07-20 15:57:40.863974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.100 [2024-07-20 15:57:40.863996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.038 ms 00:22:06.100 [2024-07-20 15:57:40.864019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.100 [2024-07-20 15:57:40.864114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.100 [2024-07-20 15:57:40.864127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:06.100 [2024-07-20 15:57:40.864140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:06.100 [2024-07-20 15:57:40.864152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.100 [2024-07-20 15:57:40.874785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:22:06.100 [2024-07-20 15:57:40.874818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.100 [2024-07-20 15:57:40.874838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.582 ms 00:22:06.100 [2024-07-20 15:57:40.874848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.100 [2024-07-20 15:57:40.874901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.100 [2024-07-20 15:57:40.874912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.100 [2024-07-20 15:57:40.874923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:06.100 [2024-07-20 15:57:40.874932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.100 [2024-07-20 15:57:40.875412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.100 [2024-07-20 15:57:40.875433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.100 [2024-07-20 15:57:40.875444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:22:06.100 [2024-07-20 15:57:40.875454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.100 [2024-07-20 15:57:40.875563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.100 [2024-07-20 15:57:40.875579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.100 [2024-07-20 15:57:40.875596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:22:06.100 [2024-07-20 15:57:40.875606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.100 [2024-07-20 15:57:40.881634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.100 [2024-07-20 15:57:40.881667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.100 [2024-07-20 15:57:40.881686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.016 ms 00:22:06.100 [2024-07-20 15:57:40.881696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.100 [2024-07-20 15:57:40.884284] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:06.100 [2024-07-20 15:57:40.884319] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:06.100 [2024-07-20 15:57:40.884337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.100 [2024-07-20 15:57:40.884348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:06.100 [2024-07-20 15:57:40.884371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.553 ms 00:22:06.100 [2024-07-20 15:57:40.884380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.360 [2024-07-20 15:57:40.897021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.360 [2024-07-20 15:57:40.897066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:06.360 [2024-07-20 15:57:40.897096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.621 ms 00:22:06.360 [2024-07-20 15:57:40.897106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.360 [2024-07-20 15:57:40.898753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.360 [2024-07-20 15:57:40.898785] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:06.360 [2024-07-20 15:57:40.898796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:22:06.360 [2024-07-20 15:57:40.898806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.360 [2024-07-20 15:57:40.900213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.360 [2024-07-20 15:57:40.900245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:06.360 [2024-07-20 15:57:40.900256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:22:06.360 [2024-07-20 15:57:40.900265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.360 [2024-07-20 15:57:40.900551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.360 [2024-07-20 15:57:40.900567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:06.360 [2024-07-20 15:57:40.900579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:22:06.360 [2024-07-20 15:57:40.900597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.360 [2024-07-20 15:57:40.920191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.360 [2024-07-20 15:57:40.920249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:06.360 [2024-07-20 15:57:40.920265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.606 ms 00:22:06.360 [2024-07-20 15:57:40.920275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.360 [2024-07-20 15:57:40.926225] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:06.360 [2024-07-20 15:57:40.928499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.360 [2024-07-20 15:57:40.928524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:06.360 [2024-07-20 15:57:40.928535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.169 ms 00:22:06.360 [2024-07-20 15:57:40.928561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.360 [2024-07-20 15:57:40.928611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.360 [2024-07-20 15:57:40.928622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:06.360 [2024-07-20 15:57:40.928637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:06.360 [2024-07-20 15:57:40.928659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.360 [2024-07-20 15:57:40.928732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.360 [2024-07-20 15:57:40.928744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:06.360 [2024-07-20 15:57:40.928754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:06.360 [2024-07-20 15:57:40.928763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.360 [2024-07-20 15:57:40.928787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.360 [2024-07-20 15:57:40.928797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:06.360 [2024-07-20 15:57:40.928807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:06.360 [2024-07-20 15:57:40.928829] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.360 [2024-07-20 15:57:40.928858] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:06.360 [2024-07-20 15:57:40.928870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.360 [2024-07-20 15:57:40.928898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:06.360 [2024-07-20 15:57:40.928909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:06.360 [2024-07-20 15:57:40.928918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.360 [2024-07-20 15:57:40.932412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.360 [2024-07-20 15:57:40.932445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:06.360 [2024-07-20 15:57:40.932457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.478 ms 00:22:06.360 [2024-07-20 15:57:40.932467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.360 [2024-07-20 15:57:40.932535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.360 [2024-07-20 15:57:40.932547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:06.360 [2024-07-20 15:57:40.932557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:06.360 [2024-07-20 15:57:40.932567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.360 [2024-07-20 15:57:40.933770] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 107.441 ms, result 0 00:22:47.138  Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-20 15:58:21.679364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.138 [2024-07-20 15:58:21.679432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:47.138 [2024-07-20 15:58:21.679449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:47.138 [2024-07-20 15:58:21.679459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.138 [2024-07-20
15:58:21.681788] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:47.138 [2024-07-20 15:58:21.683915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.138 [2024-07-20 15:58:21.683948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:47.138 [2024-07-20 15:58:21.683961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:22:47.139 [2024-07-20 15:58:21.683982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.139 [2024-07-20 15:58:21.693670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.139 [2024-07-20 15:58:21.693705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:47.139 [2024-07-20 15:58:21.693734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.325 ms 00:22:47.139 [2024-07-20 15:58:21.693744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.139 [2024-07-20 15:58:21.717686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.139 [2024-07-20 15:58:21.717736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:47.139 [2024-07-20 15:58:21.717750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.956 ms 00:22:47.139 [2024-07-20 15:58:21.717761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.139 [2024-07-20 15:58:21.722842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.139 [2024-07-20 15:58:21.722872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:47.139 [2024-07-20 15:58:21.722884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.057 ms 00:22:47.139 [2024-07-20 15:58:21.722895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.139 [2024-07-20 15:58:21.724414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.139 [2024-07-20 15:58:21.724448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:47.139 [2024-07-20 15:58:21.724460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:22:47.139 [2024-07-20 15:58:21.724469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.139 [2024-07-20 15:58:21.728096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.139 [2024-07-20 15:58:21.728142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:47.139 [2024-07-20 15:58:21.728154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.604 ms 00:22:47.139 [2024-07-20 15:58:21.728164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.139 [2024-07-20 15:58:21.840782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.139 [2024-07-20 15:58:21.840840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:47.139 [2024-07-20 15:58:21.840855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.770 ms 00:22:47.139 [2024-07-20 15:58:21.840865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.139 [2024-07-20 15:58:21.842938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.139 [2024-07-20 15:58:21.842971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:47.139 [2024-07-20 15:58:21.842982] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:22:47.139 [2024-07-20 15:58:21.842992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.139 [2024-07-20 15:58:21.844412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.139 [2024-07-20 15:58:21.844442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:47.139 [2024-07-20 15:58:21.844453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:22:47.139 [2024-07-20 15:58:21.844463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.139 [2024-07-20 15:58:21.845559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.139 [2024-07-20 15:58:21.845590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:47.139 [2024-07-20 15:58:21.845601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:22:47.139 [2024-07-20 15:58:21.845610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.139 [2024-07-20 15:58:21.846736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.139 [2024-07-20 15:58:21.846769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:47.139 [2024-07-20 15:58:21.846780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:22:47.139 [2024-07-20 15:58:21.846788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.139 [2024-07-20 15:58:21.846814] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:47.139 [2024-07-20 15:58:21.846835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 100352 / 261120 wr_cnt: 1 state: open 00:22:47.139 [2024-07-20 15:58:21.846848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.846859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.846870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.846880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.846891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.846902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.846912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.846922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.846933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.846943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.846954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.846966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.846977] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.846987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.846997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 
[2024-07-20 15:58:21.847242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 
state: free 00:22:47.139 [2024-07-20 15:58:21.847535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:47.139 [2024-07-20 15:58:21.847546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 
0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:47.140 [2024-07-20 15:58:21.847932] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:47.140 [2024-07-20 15:58:21.847942] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b9b94abc-a632-42a4-8e5c-1d0136f370ff 00:22:47.140 [2024-07-20 15:58:21.847953] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 100352 00:22:47.140 [2024-07-20 15:58:21.847973] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 101312 00:22:47.140 [2024-07-20 15:58:21.847994] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 100352 00:22:47.140 [2024-07-20 15:58:21.848004] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:22:47.140 [2024-07-20 15:58:21.848015] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:47.140 [2024-07-20 15:58:21.848025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:47.140 [2024-07-20 15:58:21.848034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:47.140 [2024-07-20 15:58:21.848043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:47.140 [2024-07-20 15:58:21.848054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:47.140 [2024-07-20 15:58:21.848064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.140 [2024-07-20 15:58:21.848073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:47.140 [2024-07-20 15:58:21.848087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:22:47.140 [2024-07-20 15:58:21.848096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.140 [2024-07-20 15:58:21.849782] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:22:47.140 [2024-07-20 15:58:21.849803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:47.140 [2024-07-20 15:58:21.849814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:22:47.140 [2024-07-20 15:58:21.849823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.140 [2024-07-20 15:58:21.849941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.140 [2024-07-20 15:58:21.849953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:47.140 [2024-07-20 15:58:21.849964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:22:47.140 [2024-07-20 15:58:21.849973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.140 [2024-07-20 15:58:21.855932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.140 [2024-07-20 15:58:21.855953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:47.140 [2024-07-20 15:58:21.855964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.140 [2024-07-20 15:58:21.855973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.140 [2024-07-20 15:58:21.856020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.140 [2024-07-20 15:58:21.856030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:47.140 [2024-07-20 15:58:21.856039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.140 [2024-07-20 15:58:21.856048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.140 [2024-07-20 15:58:21.856100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.140 [2024-07-20 15:58:21.856112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:47.140 [2024-07-20 15:58:21.856121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.140 [2024-07-20 15:58:21.856130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.140 [2024-07-20 15:58:21.856144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.140 [2024-07-20 15:58:21.856158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:47.140 [2024-07-20 15:58:21.856173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.140 [2024-07-20 15:58:21.856182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.140 [2024-07-20 15:58:21.867272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.140 [2024-07-20 15:58:21.867317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:47.140 [2024-07-20 15:58:21.867330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.140 [2024-07-20 15:58:21.867341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.140 [2024-07-20 15:58:21.875472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.140 [2024-07-20 15:58:21.875506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:47.140 [2024-07-20 15:58:21.875517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.140 [2024-07-20 15:58:21.875543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
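The statistics dump above also gives the write amplification directly: WAF = total writes / user writes = 101312 / 100352 ≈ 1.0096, i.e. roughly 1% of the writes were FTL housekeeping. More generally, every management step in this trace is a fixed group of trace_step records (Action or Rollback, then name, duration, and status), so per-step timings can be tabulated straight from the console output. A rough sketch, assuming GNU sed and that this output was captured to a hypothetical file build.log:

    # Split the stream so each bracketed trace record sits on its own line,
    # keep only the "name:" (line 428) and "duration:" (line 430) records,
    # strip the timestamp/file prefixes, then pair each name with its duration.
    sed 's/ \[2024-/\n[2024-/g' build.log |
      grep -E '(428|430):trace_step' |
      sed -e 's/ [0-9][0-9]:[0-9][0-9]:[0-9.]*$//' -e 's/.*\[FTL\]\[ftl0\] //' |
      paste - -

Sorting the result by duration quickly shows that 'Persist P2L metadata' (112.770 ms above) dominates this 197.700 ms shutdown.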
00:22:47.140 [2024-07-20 15:58:21.875599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.140 [2024-07-20 15:58:21.875610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:47.140 [2024-07-20 15:58:21.875620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.140 [2024-07-20 15:58:21.875629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.140 [2024-07-20 15:58:21.875652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.140 [2024-07-20 15:58:21.875662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:47.140 [2024-07-20 15:58:21.875678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.140 [2024-07-20 15:58:21.875687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.140 [2024-07-20 15:58:21.875755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.140 [2024-07-20 15:58:21.875767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:47.140 [2024-07-20 15:58:21.875776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.140 [2024-07-20 15:58:21.875786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.140 [2024-07-20 15:58:21.875817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.140 [2024-07-20 15:58:21.875829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:47.140 [2024-07-20 15:58:21.875838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.140 [2024-07-20 15:58:21.875851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.140 [2024-07-20 15:58:21.875885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.140 [2024-07-20 15:58:21.875896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:47.140 [2024-07-20 15:58:21.875906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.140 [2024-07-20 15:58:21.875914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.140 [2024-07-20 15:58:21.875967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:47.140 [2024-07-20 15:58:21.875984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:47.140 [2024-07-20 15:58:21.876002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:47.140 [2024-07-20 15:58:21.876017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.140 [2024-07-20 15:58:21.876122] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 197.700 ms, result 0 00:22:48.077 00:22:48.077 00:22:48.077 15:58:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:22:49.974 15:58:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:49.974 [2024-07-20 15:58:24.394043] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
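The spdk_dd invocation above reads the FTL bdev back into testfile for the md5 comparison driven at dirty_shutdown.sh@90; --count=262144 at the 4 KiB block size implied by the earlier progress output is exactly the 1024 MB written before the shutdown. A minimal sketch of the whole round-trip check, with hypothetical file names: --ib/--of come from the log itself, and --if/--ob are spdk_dd's counterparts for the write direction.

    # Hypothetical round trip through the FTL bdev described by ftl.json:
    # push 262144 x 4 KiB blocks (1 GiB) into ftl0, read them back, and
    # require identical md5 digests.
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    "$DD" --if=testfile --ob=ftl0 --count=262144 --json="$CFG"     # host file -> FTL bdev
    "$DD" --ib=ftl0 --of=testfile.rb --count=262144 --json="$CFG"  # FTL bdev -> host file
    md5sum testfile testfile.rb                                    # digests must match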
00:22:49.974 [2024-07-20 15:58:24.394181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92698 ] 00:22:49.974 [2024-07-20 15:58:24.542097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.974 [2024-07-20 15:58:24.582349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.974 [2024-07-20 15:58:24.681721] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:49.974 [2024-07-20 15:58:24.681792] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:50.233 [2024-07-20 15:58:24.833382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.233 [2024-07-20 15:58:24.833427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:50.233 [2024-07-20 15:58:24.833441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:50.233 [2024-07-20 15:58:24.833451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.233 [2024-07-20 15:58:24.833507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.233 [2024-07-20 15:58:24.833520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:50.233 [2024-07-20 15:58:24.833531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:50.233 [2024-07-20 15:58:24.833544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.233 [2024-07-20 15:58:24.833572] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:50.233 [2024-07-20 15:58:24.833831] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:50.233 [2024-07-20 15:58:24.833950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.233 [2024-07-20 15:58:24.833963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:50.233 [2024-07-20 15:58:24.833977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:22:50.233 [2024-07-20 15:58:24.833987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.233 [2024-07-20 15:58:24.835391] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:50.233 [2024-07-20 15:58:24.837819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.233 [2024-07-20 15:58:24.837861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:50.233 [2024-07-20 15:58:24.837877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.432 ms 00:22:50.233 [2024-07-20 15:58:24.837887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.233 [2024-07-20 15:58:24.837941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.233 [2024-07-20 15:58:24.837953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:50.233 [2024-07-20 15:58:24.837964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:50.233 [2024-07-20 15:58:24.837974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.233 [2024-07-20 15:58:24.844501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.233 [2024-07-20 
15:58:24.844538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:50.233 [2024-07-20 15:58:24.844549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.481 ms 00:22:50.233 [2024-07-20 15:58:24.844560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.233 [2024-07-20 15:58:24.844657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.233 [2024-07-20 15:58:24.844670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:50.233 [2024-07-20 15:58:24.844682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:50.233 [2024-07-20 15:58:24.844697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.233 [2024-07-20 15:58:24.844760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.233 [2024-07-20 15:58:24.844776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:50.233 [2024-07-20 15:58:24.844792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:50.234 [2024-07-20 15:58:24.844802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.234 [2024-07-20 15:58:24.844825] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:50.234 [2024-07-20 15:58:24.846427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.234 [2024-07-20 15:58:24.846449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:50.234 [2024-07-20 15:58:24.846460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.610 ms 00:22:50.234 [2024-07-20 15:58:24.846470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.234 [2024-07-20 15:58:24.846500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.234 [2024-07-20 15:58:24.846511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:50.234 [2024-07-20 15:58:24.846525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:50.234 [2024-07-20 15:58:24.846534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.234 [2024-07-20 15:58:24.846564] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:50.234 [2024-07-20 15:58:24.846593] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:50.234 [2024-07-20 15:58:24.846632] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:50.234 [2024-07-20 15:58:24.846659] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:50.234 [2024-07-20 15:58:24.846742] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:50.234 [2024-07-20 15:58:24.846759] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:50.234 [2024-07-20 15:58:24.846779] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:50.234 [2024-07-20 15:58:24.846792] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:50.234 [2024-07-20 15:58:24.846804] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:50.234 [2024-07-20 15:58:24.846814] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:50.234 [2024-07-20 15:58:24.846824] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:50.234 [2024-07-20 15:58:24.846834] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:50.234 [2024-07-20 15:58:24.846843] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:50.234 [2024-07-20 15:58:24.846854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.234 [2024-07-20 15:58:24.846863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:50.234 [2024-07-20 15:58:24.846873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:22:50.234 [2024-07-20 15:58:24.846886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.234 [2024-07-20 15:58:24.846952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.234 [2024-07-20 15:58:24.846965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:50.234 [2024-07-20 15:58:24.846975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:50.234 [2024-07-20 15:58:24.846984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.234 [2024-07-20 15:58:24.847065] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:50.234 [2024-07-20 15:58:24.847077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:50.234 [2024-07-20 15:58:24.847087] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.234 [2024-07-20 15:58:24.847098] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.234 [2024-07-20 15:58:24.847113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:50.234 [2024-07-20 15:58:24.847122] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:50.234 [2024-07-20 15:58:24.847131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:50.234 [2024-07-20 15:58:24.847141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:50.234 [2024-07-20 15:58:24.847150] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:50.234 [2024-07-20 15:58:24.847159] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.234 [2024-07-20 15:58:24.847168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:50.234 [2024-07-20 15:58:24.847180] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:50.234 [2024-07-20 15:58:24.847189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:50.234 [2024-07-20 15:58:24.847203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:50.234 [2024-07-20 15:58:24.847213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:50.234 [2024-07-20 15:58:24.847222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.234 [2024-07-20 15:58:24.847231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:50.234 [2024-07-20 15:58:24.847240] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:50.234 [2024-07-20 15:58:24.847249] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:22:50.234 [2024-07-20 15:58:24.847258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:50.234 [2024-07-20 15:58:24.847267] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:50.234 [2024-07-20 15:58:24.847276] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.234 [2024-07-20 15:58:24.847285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:50.234 [2024-07-20 15:58:24.847294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:50.234 [2024-07-20 15:58:24.847303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.234 [2024-07-20 15:58:24.847311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:50.234 [2024-07-20 15:58:24.847320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:50.234 [2024-07-20 15:58:24.847329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.234 [2024-07-20 15:58:24.847338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:50.234 [2024-07-20 15:58:24.847350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:50.234 [2024-07-20 15:58:24.847371] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:50.234 [2024-07-20 15:58:24.847381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:50.234 [2024-07-20 15:58:24.847390] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:50.234 [2024-07-20 15:58:24.847399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.234 [2024-07-20 15:58:24.847408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:50.234 [2024-07-20 15:58:24.847417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:50.234 [2024-07-20 15:58:24.847425] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:50.234 [2024-07-20 15:58:24.847434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:50.234 [2024-07-20 15:58:24.847444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:50.234 [2024-07-20 15:58:24.847452] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.234 [2024-07-20 15:58:24.847461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:50.234 [2024-07-20 15:58:24.847470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:50.234 [2024-07-20 15:58:24.847479] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.234 [2024-07-20 15:58:24.847488] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:50.234 [2024-07-20 15:58:24.847505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:50.234 [2024-07-20 15:58:24.847524] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:50.234 [2024-07-20 15:58:24.847534] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:50.234 [2024-07-20 15:58:24.847543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:50.234 [2024-07-20 15:58:24.847552] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:50.234 [2024-07-20 15:58:24.847561] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:50.234 [2024-07-20 15:58:24.847571] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:50.234 [2024-07-20 15:58:24.847579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:50.234 [2024-07-20 15:58:24.847588] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:50.234 [2024-07-20 15:58:24.847598] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:50.234 [2024-07-20 15:58:24.847610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.234 [2024-07-20 15:58:24.847621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:50.234 [2024-07-20 15:58:24.847632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:50.234 [2024-07-20 15:58:24.847642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:50.234 [2024-07-20 15:58:24.847652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:50.234 [2024-07-20 15:58:24.847662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:50.234 [2024-07-20 15:58:24.847672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:50.234 [2024-07-20 15:58:24.847684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:50.234 [2024-07-20 15:58:24.847695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:50.234 [2024-07-20 15:58:24.847705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:50.234 [2024-07-20 15:58:24.847715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:50.234 [2024-07-20 15:58:24.847725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:50.234 [2024-07-20 15:58:24.847735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:50.234 [2024-07-20 15:58:24.847744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:50.234 [2024-07-20 15:58:24.847754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:50.234 [2024-07-20 15:58:24.847764] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:50.234 [2024-07-20 15:58:24.847774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:50.234 [2024-07-20 15:58:24.847786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
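The SB metadata layout entries here give each region's extent in FTL blocks (blk_offs/blk_sz in hex), while the layout dump above reports the same regions in MiB. A quick cross-check of that arithmetic, as a sketch only: it assumes SPDK's 4096-byte FTL block size and infers the region-type-to-name mapping (0x2 = l2p, 0xa-0xd = p2l0-p2l3, 0x9 = data_btm) by matching sizes, since neither the block size nor the mapping is printed in this log.

FTL_BLOCK_SIZE = 4096  # bytes; SPDK's FTL block size (assumed, not printed above)

def blk_to_mib(blk_sz_hex):
    # Convert a blk_sz field from the SB dump (a count of FTL blocks) to MiB.
    return int(blk_sz_hex, 16) * FTL_BLOCK_SIZE / (1024 * 1024)

print(blk_to_mib("0x5000"))     # 80.0     -> "Region l2p ... blocks: 80.00 MiB"
print(blk_to_mib("0x800"))      # 8.0      -> each "Region p2lN ... blocks: 8.00 MiB"
print(blk_to_mib("0x1900000"))  # 102400.0 -> "Region data_btm ... blocks: 102400.00 MiB"

# The l2p size is also consistent with the reported table parameters:
print(20971520 * 4 / (1024 * 1024))  # "L2P entries" * "L2P address size" = 80.0 MiB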
00:22:50.234 [2024-07-20 15:58:24.847796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:50.234 [2024-07-20 15:58:24.847814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:50.235 [2024-07-20 15:58:24.847824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:50.235 [2024-07-20 15:58:24.847835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.847845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:50.235 [2024-07-20 15:58:24.847857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:22:50.235 [2024-07-20 15:58:24.847869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.871854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.872002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:50.235 [2024-07-20 15:58:24.872095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.976 ms 00:22:50.235 [2024-07-20 15:58:24.872137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.872262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.872302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:50.235 [2024-07-20 15:58:24.872337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:50.235 [2024-07-20 15:58:24.872467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.882684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.882802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:50.235 [2024-07-20 15:58:24.882940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.106 ms 00:22:50.235 [2024-07-20 15:58:24.882986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.883040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.883071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:50.235 [2024-07-20 15:58:24.883101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:50.235 [2024-07-20 15:58:24.883134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.883684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.883785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:50.235 [2024-07-20 15:58:24.883859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:22:50.235 [2024-07-20 15:58:24.883892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.884031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.884067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:50.235 [2024-07-20 15:58:24.884140] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:22:50.235 [2024-07-20 15:58:24.884167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.889994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.890108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:50.235 [2024-07-20 15:58:24.890206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.792 ms 00:22:50.235 [2024-07-20 15:58:24.890241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.892857] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:50.235 [2024-07-20 15:58:24.893005] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:50.235 [2024-07-20 15:58:24.893104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.893136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:50.235 [2024-07-20 15:58:24.893170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.759 ms 00:22:50.235 [2024-07-20 15:58:24.893238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.905770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.905918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:50.235 [2024-07-20 15:58:24.906021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.493 ms 00:22:50.235 [2024-07-20 15:58:24.906057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.907735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.907853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:50.235 [2024-07-20 15:58:24.907871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:22:50.235 [2024-07-20 15:58:24.907882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.909290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.909320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:50.235 [2024-07-20 15:58:24.909331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.377 ms 00:22:50.235 [2024-07-20 15:58:24.909340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.909630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.909647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:50.235 [2024-07-20 15:58:24.909662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:22:50.235 [2024-07-20 15:58:24.909671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.929770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.929823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:50.235 [2024-07-20 15:58:24.929839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.107 ms 00:22:50.235 
[2024-07-20 15:58:24.929865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.935976] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:50.235 [2024-07-20 15:58:24.938576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.938604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:50.235 [2024-07-20 15:58:24.938617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.680 ms 00:22:50.235 [2024-07-20 15:58:24.938627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.938675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.938688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:50.235 [2024-07-20 15:58:24.938698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:50.235 [2024-07-20 15:58:24.938711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.940285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.940322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:50.235 [2024-07-20 15:58:24.940339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:22:50.235 [2024-07-20 15:58:24.940372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.940399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.940410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:50.235 [2024-07-20 15:58:24.940421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:50.235 [2024-07-20 15:58:24.940430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.940467] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:50.235 [2024-07-20 15:58:24.940479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.940489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:50.235 [2024-07-20 15:58:24.940503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:50.235 [2024-07-20 15:58:24.940522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.944105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.944150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:50.235 [2024-07-20 15:58:24.944162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.571 ms 00:22:50.235 [2024-07-20 15:58:24.944172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.944239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.235 [2024-07-20 15:58:24.944259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:50.235 [2024-07-20 15:58:24.944270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:50.235 [2024-07-20 15:58:24.944283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.235 [2024-07-20 15:58:24.949822] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 115.664 ms, result 0 00:23:21.820  Copying: 1024/1024 [MB] (average 32 MBps)[2024-07-20 15:58:56.521772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.820 [2024-07-20 15:58:56.523043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:21.820 [2024-07-20 15:58:56.523115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:21.820 [2024-07-20 15:58:56.523151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.820 [2024-07-20 15:58:56.523244] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:21.820 [2024-07-20 15:58:56.524288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.820 [2024-07-20 15:58:56.524332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:21.820 [2024-07-20 15:58:56.524391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:23:21.820 [2024-07-20 15:58:56.524425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.820 [2024-07-20 15:58:56.524999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.820 [2024-07-20 15:58:56.525036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:21.820 [2024-07-20 15:58:56.525069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:23:21.820 [2024-07-20 15:58:56.525125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.820 [2024-07-20 15:58:56.546225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.820 [2024-07-20 15:58:56.546424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:21.820 [2024-07-20 15:58:56.546569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.084 ms 00:23:21.820 [2024-07-20 15:58:56.546621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.820 [2024-07-20 15:58:56.554077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.820 [2024-07-20 15:58:56.554253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:21.820 [2024-07-20 15:58:56.554422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.312 ms 00:23:21.820 [2024-07-20 15:58:56.554469] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.820 [2024-07-20 15:58:56.556019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.820 [2024-07-20 15:58:56.556152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:21.820 [2024-07-20 15:58:56.556227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:23:21.820 [2024-07-20 15:58:56.556260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.820 [2024-07-20 15:58:56.560045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.820 [2024-07-20 15:58:56.560177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:21.820 [2024-07-20 15:58:56.560272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.737 ms 00:23:21.820 [2024-07-20 15:58:56.560306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.820 [2024-07-20 15:58:56.564256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.820 [2024-07-20 15:58:56.564394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:21.820 [2024-07-20 15:58:56.564474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.891 ms 00:23:21.820 [2024-07-20 15:58:56.564510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.820 [2024-07-20 15:58:56.566625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.820 [2024-07-20 15:58:56.566742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:21.820 [2024-07-20 15:58:56.566812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.066 ms 00:23:21.820 [2024-07-20 15:58:56.566846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.820 [2024-07-20 15:58:56.568280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.820 [2024-07-20 15:58:56.568406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:21.820 [2024-07-20 15:58:56.568499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:23:21.820 [2024-07-20 15:58:56.568533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.820 [2024-07-20 15:58:56.569775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.820 [2024-07-20 15:58:56.569888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:21.820 [2024-07-20 15:58:56.569979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.192 ms 00:23:21.820 [2024-07-20 15:58:56.570012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.820 [2024-07-20 15:58:56.571232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.820 [2024-07-20 15:58:56.571347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:21.820 [2024-07-20 15:58:56.571451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.149 ms 00:23:21.820 [2024-07-20 15:58:56.571466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.820 [2024-07-20 15:58:56.571524] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:21.820 [2024-07-20 15:58:56.571542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:21.820 [2024-07-20 15:58:56.571554] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:23:21.820 [2024-07-20 15:58:56.571565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 
15:58:56.571812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:21.820 [2024-07-20 15:58:56.571929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.571940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.571950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.571960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.571971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.571981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.571991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
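Each line of this band dump reads "Band N: valid / capacity wr_cnt: W state: S", with every band sized at 261120 blocks. Only Band 1 (closed, fully valid) and Band 2 (open) hold data, and together they reconcile with the totals in the statistics dump that follows the band list. A minimal check of that arithmetic, using only numbers printed in this log:

# Valid blocks in the two non-free bands: Band 1 (closed) + Band 2 (open).
print(261120 + 3584)    # 264704 -> matches "total valid LBAs: 264704" below

# Write amplification from the stats dump: total writes over user writes.
print(166336 / 164352)  # ~1.012073 -> rounds to the reported "WAF: 1.0121"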
00:23:21.821 [2024-07-20 15:58:56.572073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:21.821 [2024-07-20 15:58:56.572617] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:23:21.821 [2024-07-20 15:58:56.572630] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b9b94abc-a632-42a4-8e5c-1d0136f370ff 00:23:21.821 [2024-07-20 15:58:56.572648] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:23:21.821 [2024-07-20 15:58:56.572664] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 166336 00:23:21.821 [2024-07-20 15:58:56.572673] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 164352 00:23:21.821 [2024-07-20 15:58:56.572684] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0121 00:23:21.821 [2024-07-20 15:58:56.572694] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:21.821 [2024-07-20 15:58:56.572704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:21.821 [2024-07-20 15:58:56.572714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:21.821 [2024-07-20 15:58:56.572723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:21.821 [2024-07-20 15:58:56.572732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:21.821 [2024-07-20 15:58:56.572741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.821 [2024-07-20 15:58:56.572752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:21.821 [2024-07-20 15:58:56.572762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:23:21.821 [2024-07-20 15:58:56.572771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.821 [2024-07-20 15:58:56.574784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.821 [2024-07-20 15:58:56.574894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:21.821 [2024-07-20 15:58:56.574987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.994 ms 00:23:21.821 [2024-07-20 15:58:56.575021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.821 [2024-07-20 15:58:56.575164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.821 [2024-07-20 15:58:56.575204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:21.821 [2024-07-20 15:58:56.575269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:21.821 [2024-07-20 15:58:56.575311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.821 [2024-07-20 15:58:56.581396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.821 [2024-07-20 15:58:56.581503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:21.821 [2024-07-20 15:58:56.581572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.821 [2024-07-20 15:58:56.581604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.821 [2024-07-20 15:58:56.581668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.821 [2024-07-20 15:58:56.581778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:21.821 [2024-07-20 15:58:56.581823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.821 [2024-07-20 15:58:56.581851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.821 [2024-07-20 15:58:56.581919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:23:21.821 [2024-07-20 15:58:56.581953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:21.821 [2024-07-20 15:58:56.581980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.821 [2024-07-20 15:58:56.582007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.821 [2024-07-20 15:58:56.582044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.821 [2024-07-20 15:58:56.582106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:21.821 [2024-07-20 15:58:56.582149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.821 [2024-07-20 15:58:56.582206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.821 [2024-07-20 15:58:56.593502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.821 [2024-07-20 15:58:56.593649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:21.822 [2024-07-20 15:58:56.593717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.822 [2024-07-20 15:58:56.593751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.822 [2024-07-20 15:58:56.601919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.822 [2024-07-20 15:58:56.602064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:21.822 [2024-07-20 15:58:56.602185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.822 [2024-07-20 15:58:56.602220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.822 [2024-07-20 15:58:56.602345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.822 [2024-07-20 15:58:56.602453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:21.822 [2024-07-20 15:58:56.602533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.822 [2024-07-20 15:58:56.602565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.822 [2024-07-20 15:58:56.602616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.822 [2024-07-20 15:58:56.602680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:21.822 [2024-07-20 15:58:56.602763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.822 [2024-07-20 15:58:56.602795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.822 [2024-07-20 15:58:56.602946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.822 [2024-07-20 15:58:56.603065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:21.822 [2024-07-20 15:58:56.603156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.822 [2024-07-20 15:58:56.603188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.822 [2024-07-20 15:58:56.603293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.822 [2024-07-20 15:58:56.603306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:21.822 [2024-07-20 15:58:56.603317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.822 [2024-07-20 15:58:56.603326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.822 [2024-07-20 
15:58:56.603388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.822 [2024-07-20 15:58:56.603416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:21.822 [2024-07-20 15:58:56.603426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.822 [2024-07-20 15:58:56.603447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.822 [2024-07-20 15:58:56.603488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.822 [2024-07-20 15:58:56.603510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:21.822 [2024-07-20 15:58:56.603520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.822 [2024-07-20 15:58:56.603529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.822 [2024-07-20 15:58:56.603652] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 82.022 ms, result 0 00:23:22.082 00:23:22.082 00:23:22.082 15:58:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:23.990 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:23.990 15:58:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:23.990 [2024-07-20 15:58:58.571041] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:23.990 [2024-07-20 15:58:58.571164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93046 ] 00:23:23.990 [2024-07-20 15:58:58.722203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.990 [2024-07-20 15:58:58.774196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.251 [2024-07-20 15:58:58.876995] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:24.251 [2024-07-20 15:58:58.877067] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:24.251 [2024-07-20 15:58:59.028886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-07-20 15:58:59.028932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:24.251 [2024-07-20 15:58:59.028947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:24.251 [2024-07-20 15:58:59.028956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-07-20 15:58:59.029007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-07-20 15:58:59.029018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:24.251 [2024-07-20 15:58:59.029028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:24.251 [2024-07-20 15:58:59.029041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-07-20 15:58:59.029060] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:24.251 [2024-07-20 15:58:59.029267] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:24.251 [2024-07-20 15:58:59.029285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-07-20 15:58:59.029300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:24.251 [2024-07-20 15:58:59.029317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:23:24.251 [2024-07-20 15:58:59.029333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-07-20 15:58:59.030805] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:24.251 [2024-07-20 15:58:59.033300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-07-20 15:58:59.033336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:24.251 [2024-07-20 15:58:59.033370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.500 ms 00:23:24.251 [2024-07-20 15:58:59.033392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-07-20 15:58:59.033465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-07-20 15:58:59.033477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:24.251 [2024-07-20 15:58:59.033490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:24.251 [2024-07-20 15:58:59.033513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-07-20 15:58:59.040324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-07-20 15:58:59.040374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:24.251 [2024-07-20 15:58:59.040386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.759 ms 00:23:24.251 [2024-07-20 15:58:59.040396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-07-20 15:58:59.040485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-07-20 15:58:59.040498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:24.251 [2024-07-20 15:58:59.040521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:24.251 [2024-07-20 15:58:59.040531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-07-20 15:58:59.040615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-07-20 15:58:59.040626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:24.251 [2024-07-20 15:58:59.040643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:24.251 [2024-07-20 15:58:59.040653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-07-20 15:58:59.040677] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:24.251 [2024-07-20 15:58:59.042310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-07-20 15:58:59.042338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:24.251 [2024-07-20 15:58:59.042349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.642 ms 00:23:24.251 [2024-07-20 15:58:59.042373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-07-20 15:58:59.042405] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-07-20 15:58:59.042416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:24.251 [2024-07-20 15:58:59.042430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:24.251 [2024-07-20 15:58:59.042440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-07-20 15:58:59.042468] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:24.251 [2024-07-20 15:58:59.042494] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:24.251 [2024-07-20 15:58:59.042539] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:24.251 [2024-07-20 15:58:59.042558] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:24.251 [2024-07-20 15:58:59.042639] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:24.251 [2024-07-20 15:58:59.042662] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:24.251 [2024-07-20 15:58:59.042674] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:24.251 [2024-07-20 15:58:59.042687] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:24.251 [2024-07-20 15:58:59.042699] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:24.251 [2024-07-20 15:58:59.042710] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:24.251 [2024-07-20 15:58:59.042720] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:24.251 [2024-07-20 15:58:59.042729] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:24.251 [2024-07-20 15:58:59.042739] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:24.251 [2024-07-20 15:58:59.042749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-07-20 15:58:59.042759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:24.251 [2024-07-20 15:58:59.042769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:23:24.251 [2024-07-20 15:58:59.042790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-07-20 15:58:59.042866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-07-20 15:58:59.042884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:24.251 [2024-07-20 15:58:59.042894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:24.251 [2024-07-20 15:58:59.042910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-07-20 15:58:59.042999] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:24.251 [2024-07-20 15:58:59.043010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:24.251 [2024-07-20 15:58:59.043022] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:24.251 [2024-07-20 15:58:59.043032] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:23:24.251 [2024-07-20 15:58:59.043046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:24.251 [2024-07-20 15:58:59.043055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:24.251 [2024-07-20 15:58:59.043065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:24.251 [2024-07-20 15:58:59.043076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:24.251 [2024-07-20 15:58:59.043085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:24.251 [2024-07-20 15:58:59.043094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:24.251 [2024-07-20 15:58:59.043104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:24.251 [2024-07-20 15:58:59.043113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:24.251 [2024-07-20 15:58:59.043125] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:24.251 [2024-07-20 15:58:59.043134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:24.251 [2024-07-20 15:58:59.043143] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:24.251 [2024-07-20 15:58:59.043152] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.252 [2024-07-20 15:58:59.043161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:24.252 [2024-07-20 15:58:59.043170] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:24.252 [2024-07-20 15:58:59.043178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.252 [2024-07-20 15:58:59.043187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:24.252 [2024-07-20 15:58:59.043197] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:24.252 [2024-07-20 15:58:59.043206] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.252 [2024-07-20 15:58:59.043215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:24.252 [2024-07-20 15:58:59.043224] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:24.252 [2024-07-20 15:58:59.043234] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.252 [2024-07-20 15:58:59.043243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:24.252 [2024-07-20 15:58:59.043252] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:24.252 [2024-07-20 15:58:59.043260] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.252 [2024-07-20 15:58:59.043275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:24.252 [2024-07-20 15:58:59.043284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:24.252 [2024-07-20 15:58:59.043292] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.252 [2024-07-20 15:58:59.043301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:24.252 [2024-07-20 15:58:59.043310] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:24.252 [2024-07-20 15:58:59.043319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:24.252 [2024-07-20 15:58:59.043328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:24.252 [2024-07-20 15:58:59.043337] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:24.252 [2024-07-20 15:58:59.043346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:24.252 [2024-07-20 15:58:59.043365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:24.252 [2024-07-20 15:58:59.043374] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:24.252 [2024-07-20 15:58:59.043384] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.252 [2024-07-20 15:58:59.043393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:24.252 [2024-07-20 15:58:59.043402] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:24.252 [2024-07-20 15:58:59.043411] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.252 [2024-07-20 15:58:59.043420] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:24.252 [2024-07-20 15:58:59.043440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:24.252 [2024-07-20 15:58:59.043450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:24.252 [2024-07-20 15:58:59.043460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.252 [2024-07-20 15:58:59.043470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:24.252 [2024-07-20 15:58:59.043479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:24.252 [2024-07-20 15:58:59.043488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:24.252 [2024-07-20 15:58:59.043498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:24.252 [2024-07-20 15:58:59.043507] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:24.252 [2024-07-20 15:58:59.043516] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:24.252 [2024-07-20 15:58:59.043526] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:24.252 [2024-07-20 15:58:59.043538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:24.252 [2024-07-20 15:58:59.043549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:24.252 [2024-07-20 15:58:59.043559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:24.252 [2024-07-20 15:58:59.043569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:24.252 [2024-07-20 15:58:59.043579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:24.252 [2024-07-20 15:58:59.043589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:24.252 [2024-07-20 15:58:59.043602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:24.252 [2024-07-20 15:58:59.043612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:24.252 [2024-07-20 15:58:59.043622] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:24.252 [2024-07-20 15:58:59.043633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:24.252 [2024-07-20 15:58:59.043643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:24.252 [2024-07-20 15:58:59.043653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:24.252 [2024-07-20 15:58:59.043665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:24.252 [2024-07-20 15:58:59.043675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:24.252 [2024-07-20 15:58:59.043685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:24.252 [2024-07-20 15:58:59.043695] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:24.252 [2024-07-20 15:58:59.043705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:24.252 [2024-07-20 15:58:59.043717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:24.252 [2024-07-20 15:58:59.043727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:24.252 [2024-07-20 15:58:59.043745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:24.252 [2024-07-20 15:58:59.043756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:24.252 [2024-07-20 15:58:59.043767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.252 [2024-07-20 15:58:59.043780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:24.252 [2024-07-20 15:58:59.043790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:23:24.252 [2024-07-20 15:58:59.043804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.512 [2024-07-20 15:58:59.065916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.512 [2024-07-20 15:58:59.066049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:24.512 [2024-07-20 15:58:59.066230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.092 ms 00:23:24.512 [2024-07-20 15:58:59.066274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.512 [2024-07-20 15:58:59.066411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.512 [2024-07-20 15:58:59.066552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:24.512 [2024-07-20 15:58:59.066606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:23:24.512 [2024-07-20 15:58:59.066641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:24.512 [2024-07-20 15:58:59.077560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.077705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:24.513 [2024-07-20 15:58:59.077801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.841 ms 00:23:24.513 [2024-07-20 15:58:59.077847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.077900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.077932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:24.513 [2024-07-20 15:58:59.077960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:24.513 [2024-07-20 15:58:59.077993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.078559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.078660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:24.513 [2024-07-20 15:58:59.078775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:23:24.513 [2024-07-20 15:58:59.078812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.078952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.079017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:24.513 [2024-07-20 15:58:59.079072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:24.513 [2024-07-20 15:58:59.079101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.085021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.085152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:24.513 [2024-07-20 15:58:59.085308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.885 ms 00:23:24.513 [2024-07-20 15:58:59.085343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.087977] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:24.513 [2024-07-20 15:58:59.088132] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:24.513 [2024-07-20 15:58:59.088227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.088263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:24.513 [2024-07-20 15:58:59.088293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:23:24.513 [2024-07-20 15:58:59.088320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.100667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.100784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:24.513 [2024-07-20 15:58:59.100912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.151 ms 00:23:24.513 [2024-07-20 15:58:59.100948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.102616] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.102648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:24.513 [2024-07-20 15:58:59.102660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.577 ms 00:23:24.513 [2024-07-20 15:58:59.102669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.104094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.104127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:24.513 [2024-07-20 15:58:59.104139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms 00:23:24.513 [2024-07-20 15:58:59.104148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.104440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.104457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:24.513 [2024-07-20 15:58:59.104469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:23:24.513 [2024-07-20 15:58:59.104479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.124203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.124259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:24.513 [2024-07-20 15:58:59.124275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.732 ms 00:23:24.513 [2024-07-20 15:58:59.124301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.130319] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:24.513 [2024-07-20 15:58:59.132783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.132810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:24.513 [2024-07-20 15:58:59.132823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.453 ms 00:23:24.513 [2024-07-20 15:58:59.132832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.132882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.132896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:24.513 [2024-07-20 15:58:59.132907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:24.513 [2024-07-20 15:58:59.132917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.133810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.133838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:24.513 [2024-07-20 15:58:59.133853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:23:24.513 [2024-07-20 15:58:59.133862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.133886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.133896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:24.513 [2024-07-20 15:58:59.133905] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:24.513 [2024-07-20 15:58:59.133915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.133948] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:24.513 [2024-07-20 15:58:59.133959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.133968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:24.513 [2024-07-20 15:58:59.133997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:24.513 [2024-07-20 15:58:59.134006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.137615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.137658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:24.513 [2024-07-20 15:58:59.137670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.598 ms 00:23:24.513 [2024-07-20 15:58:59.137680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.137743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.513 [2024-07-20 15:58:59.137755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:24.513 [2024-07-20 15:58:59.137774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:24.513 [2024-07-20 15:58:59.137788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.513 [2024-07-20 15:58:59.138858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 109.726 ms, result 0 00:24:02.076 Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-20 15:59:36.631925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.076 [2024-07-20 15:59:36.632017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:02.076 [2024-07-20 15:59:36.632052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:02.076 [2024-07-20 15:59:36.632069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
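The 'FTL startup' management process above reports each step as an Action / name / duration / status quartet and closes with an overall figure (109.726 ms, result 0). A minimal sketch for sanity-checking a saved console log, assuming one notice per line and a hypothetical ftl.log path; the per-step sum will come in under the reported total, since steps logged before this excerpt and the time between steps also count:

    # Sum the per-step durations from the trace_step notices (values are in ms).
    awk '/trace_step/ && /duration:/ {
           for (i = 1; i <= NF; i++)
               if ($i == "duration:") sum += $(i + 1)
         }
         END { printf "sum of step durations: %.3f ms\n", sum }' ftl.log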
00:24:02.076 [2024-07-20 15:59:36.632107] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:02.076 [2024-07-20 15:59:36.633116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.076 [2024-07-20 15:59:36.633153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:02.076 [2024-07-20 15:59:36.633172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:24:02.076 [2024-07-20 15:59:36.633188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.076 [2024-07-20 15:59:36.633510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.076 [2024-07-20 15:59:36.633531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:02.076 [2024-07-20 15:59:36.633549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:24:02.076 [2024-07-20 15:59:36.633573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.076 [2024-07-20 15:59:36.638808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.076 [2024-07-20 15:59:36.638853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:02.076 [2024-07-20 15:59:36.638873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.216 ms 00:24:02.076 [2024-07-20 15:59:36.638890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.076 [2024-07-20 15:59:36.645298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.076 [2024-07-20 15:59:36.645338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:02.076 [2024-07-20 15:59:36.645367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.383 ms 00:24:02.076 [2024-07-20 15:59:36.645389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.076 [2024-07-20 15:59:36.647294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.076 [2024-07-20 15:59:36.647345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:02.076 [2024-07-20 15:59:36.647375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.804 ms 00:24:02.076 [2024-07-20 15:59:36.647387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.076 [2024-07-20 15:59:36.651751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.076 [2024-07-20 15:59:36.651807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:02.076 [2024-07-20 15:59:36.651820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.331 ms 00:24:02.076 [2024-07-20 15:59:36.651846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.076 [2024-07-20 15:59:36.656104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.076 [2024-07-20 15:59:36.656147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:02.076 [2024-07-20 15:59:36.656161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.225 ms 00:24:02.076 [2024-07-20 15:59:36.656178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.076 [2024-07-20 15:59:36.658479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.076 [2024-07-20 15:59:36.658516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:02.076 
[2024-07-20 15:59:36.658529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.285 ms 00:24:02.076 [2024-07-20 15:59:36.658539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.076 [2024-07-20 15:59:36.660382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.076 [2024-07-20 15:59:36.660418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:02.076 [2024-07-20 15:59:36.660429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.814 ms 00:24:02.076 [2024-07-20 15:59:36.660439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.076 [2024-07-20 15:59:36.661850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.076 [2024-07-20 15:59:36.661886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:02.076 [2024-07-20 15:59:36.661898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:24:02.076 [2024-07-20 15:59:36.661907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.076 [2024-07-20 15:59:36.663246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.076 [2024-07-20 15:59:36.663290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:02.076 [2024-07-20 15:59:36.663302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:24:02.076 [2024-07-20 15:59:36.663311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.076 [2024-07-20 15:59:36.663340] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:02.076 [2024-07-20 15:59:36.663373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:02.076 [2024-07-20 15:59:36.663387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:24:02.076 [2024-07-20 15:59:36.663399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 
[2024-07-20 15:59:36.663516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 
state: free 00:24:02.076 [2024-07-20 15:59:36.663799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.663993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 
0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:02.076 [2024-07-20 15:59:36.664235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:02.077 [2024-07-20 15:59:36.664490] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:02.077 [2024-07-20 15:59:36.664499] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b9b94abc-a632-42a4-8e5c-1d0136f370ff 00:24:02.077 [2024-07-20 15:59:36.664510] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:24:02.077 [2024-07-20 15:59:36.664520] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:02.077 [2024-07-20 15:59:36.664529] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:02.077 [2024-07-20 15:59:36.664540] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:02.077 [2024-07-20 15:59:36.664558] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:02.077 [2024-07-20 15:59:36.664569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:02.077 [2024-07-20 15:59:36.664582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:02.077 [2024-07-20 15:59:36.664591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:02.077 [2024-07-20 15:59:36.664600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:02.077 [2024-07-20 15:59:36.664610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.077 [2024-07-20 15:59:36.664620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:02.077 [2024-07-20 15:59:36.664630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:24:02.077 [2024-07-20 15:59:36.664640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.077 [2024-07-20 15:59:36.666799] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.077 [2024-07-20 15:59:36.666823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:02.077 [2024-07-20 15:59:36.666834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.144 ms 00:24:02.077 [2024-07-20 15:59:36.666844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.077 [2024-07-20 15:59:36.666974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.077 [2024-07-20 15:59:36.666985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:02.077 [2024-07-20 15:59:36.666995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:24:02.077 [2024-07-20 15:59:36.667005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.077 [2024-07-20 15:59:36.674237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.077 [2024-07-20 15:59:36.674266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:02.077 [2024-07-20 15:59:36.674287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.077 [2024-07-20 15:59:36.674297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.077 [2024-07-20 15:59:36.674341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.077 [2024-07-20 15:59:36.674387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:02.077 [2024-07-20 15:59:36.674414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.077 [2024-07-20 15:59:36.674424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.077 [2024-07-20 15:59:36.674466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.077 [2024-07-20 15:59:36.674479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:02.077 [2024-07-20 15:59:36.674489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.077 [2024-07-20 15:59:36.674502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.077 [2024-07-20 15:59:36.674518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.077 [2024-07-20 15:59:36.674529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:02.077 [2024-07-20 15:59:36.674539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.077 [2024-07-20 15:59:36.674550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.077 [2024-07-20 15:59:36.688062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.077 [2024-07-20 15:59:36.688103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.077 [2024-07-20 15:59:36.688116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.077 [2024-07-20 15:59:36.688134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.077 [2024-07-20 15:59:36.697520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.077 [2024-07-20 15:59:36.697554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:02.077 [2024-07-20 15:59:36.697566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.077 [2024-07-20 15:59:36.697592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:02.077 [2024-07-20 15:59:36.697644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.077 [2024-07-20 15:59:36.697656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:02.077 [2024-07-20 15:59:36.697666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.077 [2024-07-20 15:59:36.697677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.077 [2024-07-20 15:59:36.697716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.077 [2024-07-20 15:59:36.697734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:02.077 [2024-07-20 15:59:36.697744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.077 [2024-07-20 15:59:36.697753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.077 [2024-07-20 15:59:36.697826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.077 [2024-07-20 15:59:36.697844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:02.077 [2024-07-20 15:59:36.697854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.077 [2024-07-20 15:59:36.697864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.077 [2024-07-20 15:59:36.697901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.077 [2024-07-20 15:59:36.697917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:02.077 [2024-07-20 15:59:36.697927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.077 [2024-07-20 15:59:36.697936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.077 [2024-07-20 15:59:36.697975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.077 [2024-07-20 15:59:36.697986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:02.077 [2024-07-20 15:59:36.697995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.077 [2024-07-20 15:59:36.698005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.077 [2024-07-20 15:59:36.698048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.077 [2024-07-20 15:59:36.698060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:02.077 [2024-07-20 15:59:36.698069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.077 [2024-07-20 15:59:36.698086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.077 [2024-07-20 15:59:36.698253] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.382 ms, result 0 00:24:02.335 00:24:02.335 00:24:02.335 15:59:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:24:04.236 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:24:04.236 15:59:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:24:04.236 15:59:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:24:04.236 15:59:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:04.236 15:59:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:04.236 15:59:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:04.236 15:59:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:04.236 15:59:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:24:04.236 Process with pid 91384 is not found 00:24:04.236 15:59:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 91384 00:24:04.236 15:59:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@946 -- # '[' -z 91384 ']' 00:24:04.236 15:59:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # kill -0 91384 00:24:04.236 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (91384) - No such process 00:24:04.236 15:59:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@973 -- # echo 'Process with pid 91384 is not found' 00:24:04.236 15:59:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:24:04.494 Remove shared memory files 00:24:04.494 15:59:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:24:04.494 15:59:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:04.494 15:59:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:24:04.494 15:59:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:24:04.494 15:59:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:24:04.494 15:59:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:04.494 15:59:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:24:04.494 ************************************ 00:24:04.494 END TEST ftl_dirty_shutdown 00:24:04.494 ************************************ 00:24:04.494 00:24:04.494 real 3m15.988s 00:24:04.494 user 3m42.408s 00:24:04.494 sys 0m34.138s 00:24:04.494 15:59:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:04.494 15:59:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:04.494 15:59:39 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:24:04.494 15:59:39 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:24:04.494 15:59:39 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:04.495 15:59:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:04.495 ************************************ 00:24:04.495 START TEST ftl_upgrade_shutdown 00:24:04.495 ************************************ 00:24:04.495 15:59:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:24:04.754 * Looking for test storage... 
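The pass/fail criterion for the dirty-shutdown test that just finished is a plain md5 round trip: a checksum of the test file is recorded before the unclean shutdown, and md5sum -c re-verifies the data once the FTL bdev has been brought back up ("testfile2: OK" above). The same pattern in isolation, with hypothetical file names:

    # Record a checksum, then verify it after the dirty shutdown and restart.
    dd if=/dev/urandom of=testfile2 bs=4096 count=256 status=none
    md5sum testfile2 > testfile2.md5
    # ... unclean shutdown and reload of the FTL bdev happens here ...
    md5sum -c testfile2.md5        # prints "testfile2: OK" if the data survived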
00:24:04.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown 
-- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:24:04.754 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93530 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93530 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 93530 ']' 00:24:04.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:04.755 15:59:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:04.755 [2024-07-20 15:59:39.526430] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
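tcp_target_setup amounts to launching spdk_tgt pinned to core 0 (pid 93530 above), waiting for its RPC socket, and then building the bdev stack from the FTL_* values just exported. A hand-rolled sketch under those assumptions, using only the rpc.py calls that appear in this trace; the polling loop stands in for the waitforlisten helper, and the shell variables are placeholders for the UUIDs rpc.py prints back:

    bin=/home/vagrant/spdk_repo/spdk/build/bin
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Start the target on core 0 and wait until it answers on /var/tmp/spdk.sock.
    $bin/spdk_tgt --cpumask='[0]' &
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

    # Base device: a 20480 MiB thin lvol on the NVMe at 0000:00:11.0.
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0     # -> basen1
    lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)
    lvol=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")

    # Cache device: a 5120 MiB split of the NVMe at 0000:00:10.0.
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0    # -> cachen1
    $rpc bdev_split_create cachen1 -s 5120 1                             # -> cachen1p0

    # Tie them together as the FTL bdev, with the exported L2P DRAM limit.
    $rpc -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2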
00:24:04.755 [2024-07-20 15:59:39.526554] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93530 ] 00:24:05.014 [2024-07-20 15:59:39.678391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.014 [2024-07-20 15:59:39.719460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:05.582 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:24:05.841 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:24:05.841 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:05.841 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:24:05.841 15:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=basen1 00:24:05.841 15:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:05.841 15:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:24:05.841 15:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 
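get_bdev_size derives a size in MiB from bdev_get_bdevs output as block_size × num_blocks / 1 MiB, which is what the jq probes below compute piecewise (4096 B × 1310720 blocks = 5120 MiB for basen1). The same arithmetic in one go, assuming the rpc.py path from this trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bs=$($rpc bdev_get_bdevs -b basen1 | jq '.[] .block_size')    # 4096
    nb=$($rpc bdev_get_bdevs -b basen1 | jq '.[] .num_blocks')    # 1310720
    echo $(( bs * nb / 1024 / 1024 ))                             # 5120 (MiB)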
00:24:05.841 15:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:24:06.100 15:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:06.100 { 00:24:06.100 "name": "basen1", 00:24:06.100 "aliases": [ 00:24:06.100 "596ff3ab-3f88-44de-b784-26975c85aca8" 00:24:06.100 ], 00:24:06.100 "product_name": "NVMe disk", 00:24:06.100 "block_size": 4096, 00:24:06.100 "num_blocks": 1310720, 00:24:06.100 "uuid": "596ff3ab-3f88-44de-b784-26975c85aca8", 00:24:06.100 "assigned_rate_limits": { 00:24:06.100 "rw_ios_per_sec": 0, 00:24:06.100 "rw_mbytes_per_sec": 0, 00:24:06.100 "r_mbytes_per_sec": 0, 00:24:06.100 "w_mbytes_per_sec": 0 00:24:06.100 }, 00:24:06.100 "claimed": true, 00:24:06.100 "claim_type": "read_many_write_one", 00:24:06.100 "zoned": false, 00:24:06.100 "supported_io_types": { 00:24:06.100 "read": true, 00:24:06.100 "write": true, 00:24:06.100 "unmap": true, 00:24:06.100 "write_zeroes": true, 00:24:06.100 "flush": true, 00:24:06.100 "reset": true, 00:24:06.100 "compare": true, 00:24:06.100 "compare_and_write": false, 00:24:06.100 "abort": true, 00:24:06.100 "nvme_admin": true, 00:24:06.100 "nvme_io": true 00:24:06.100 }, 00:24:06.100 "driver_specific": { 00:24:06.100 "nvme": [ 00:24:06.100 { 00:24:06.100 "pci_address": "0000:00:11.0", 00:24:06.100 "trid": { 00:24:06.100 "trtype": "PCIe", 00:24:06.100 "traddr": "0000:00:11.0" 00:24:06.100 }, 00:24:06.100 "ctrlr_data": { 00:24:06.100 "cntlid": 0, 00:24:06.100 "vendor_id": "0x1b36", 00:24:06.100 "model_number": "QEMU NVMe Ctrl", 00:24:06.100 "serial_number": "12341", 00:24:06.100 "firmware_revision": "8.0.0", 00:24:06.100 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:06.100 "oacs": { 00:24:06.100 "security": 0, 00:24:06.100 "format": 1, 00:24:06.100 "firmware": 0, 00:24:06.100 "ns_manage": 1 00:24:06.100 }, 00:24:06.100 "multi_ctrlr": false, 00:24:06.100 "ana_reporting": false 00:24:06.100 }, 00:24:06.100 "vs": { 00:24:06.101 "nvme_version": "1.4" 00:24:06.101 }, 00:24:06.101 "ns_data": { 00:24:06.101 "id": 1, 00:24:06.101 "can_share": false 00:24:06.101 } 00:24:06.101 } 00:24:06.101 ], 00:24:06.101 "mp_policy": "active_passive" 00:24:06.101 } 00:24:06.101 } 00:24:06.101 ]' 00:24:06.101 15:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:06.101 15:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:24:06.101 15:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:06.101 15:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:24:06.101 15:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:24:06.101 15:59:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:24:06.101 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:06.101 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:24:06.101 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:06.101 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:06.101 15:59:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:06.360 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=4e598205-c787-430f-8b33-bf0c3ba5b669 00:24:06.360 15:59:41 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@29 -- # for lvs in $stores 00:24:06.360 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4e598205-c787-430f-8b33-bf0c3ba5b669 00:24:06.619 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:24:06.619 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=42a28e76-fa6e-4471-9742-9ca3f8351f3e 00:24:06.619 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 42a28e76-fa6e-4471-9742-9ca3f8351f3e 00:24:06.878 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=1993c893-40b1-4158-b057-c3175848b084 00:24:06.878 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 1993c893-40b1-4158-b057-c3175848b084 ]] 00:24:06.878 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 1993c893-40b1-4158-b057-c3175848b084 5120 00:24:06.878 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:24:06.878 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:06.878 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=1993c893-40b1-4158-b057-c3175848b084 00:24:06.878 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:24:06.878 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 1993c893-40b1-4158-b057-c3175848b084 00:24:06.878 15:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=1993c893-40b1-4158-b057-c3175848b084 00:24:06.878 15:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:06.878 15:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:24:06.878 15:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:24:06.878 15:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1993c893-40b1-4158-b057-c3175848b084 00:24:07.137 15:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:07.137 { 00:24:07.137 "name": "1993c893-40b1-4158-b057-c3175848b084", 00:24:07.137 "aliases": [ 00:24:07.137 "lvs/basen1p0" 00:24:07.137 ], 00:24:07.137 "product_name": "Logical Volume", 00:24:07.137 "block_size": 4096, 00:24:07.137 "num_blocks": 5242880, 00:24:07.137 "uuid": "1993c893-40b1-4158-b057-c3175848b084", 00:24:07.137 "assigned_rate_limits": { 00:24:07.137 "rw_ios_per_sec": 0, 00:24:07.137 "rw_mbytes_per_sec": 0, 00:24:07.137 "r_mbytes_per_sec": 0, 00:24:07.137 "w_mbytes_per_sec": 0 00:24:07.137 }, 00:24:07.137 "claimed": false, 00:24:07.137 "zoned": false, 00:24:07.137 "supported_io_types": { 00:24:07.137 "read": true, 00:24:07.137 "write": true, 00:24:07.137 "unmap": true, 00:24:07.137 "write_zeroes": true, 00:24:07.137 "flush": false, 00:24:07.137 "reset": true, 00:24:07.137 "compare": false, 00:24:07.137 "compare_and_write": false, 00:24:07.137 "abort": false, 00:24:07.137 "nvme_admin": false, 00:24:07.137 "nvme_io": false 00:24:07.137 }, 00:24:07.137 "driver_specific": { 00:24:07.137 "lvol": { 00:24:07.137 "lvol_store_uuid": "42a28e76-fa6e-4471-9742-9ca3f8351f3e", 00:24:07.137 "base_bdev": "basen1", 00:24:07.137 "thin_provision": true, 00:24:07.137 "num_allocated_clusters": 0, 00:24:07.137 
"snapshot": false, 00:24:07.137 "clone": false, 00:24:07.137 "esnap_clone": false 00:24:07.137 } 00:24:07.137 } 00:24:07.137 } 00:24:07.137 ]' 00:24:07.137 15:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:07.137 15:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:24:07.137 15:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:07.137 15:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=5242880 00:24:07.137 15:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=20480 00:24:07.137 15:59:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 20480 00:24:07.137 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:24:07.137 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:07.137 15:59:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:24:07.396 15:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:24:07.396 15:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:24:07.396 15:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:24:07.656 15:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:24:07.656 15:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:24:07.656 15:59:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 1993c893-40b1-4158-b057-c3175848b084 -c cachen1p0 --l2p_dram_limit 2 00:24:07.656 [2024-07-20 15:59:42.415756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:07.656 [2024-07-20 15:59:42.415807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:07.656 [2024-07-20 15:59:42.415825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:07.656 [2024-07-20 15:59:42.415852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:07.656 [2024-07-20 15:59:42.415918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:07.656 [2024-07-20 15:59:42.415931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:07.656 [2024-07-20 15:59:42.415943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:24:07.656 [2024-07-20 15:59:42.415955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:07.656 [2024-07-20 15:59:42.415981] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:07.656 [2024-07-20 15:59:42.416250] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:07.656 [2024-07-20 15:59:42.416271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:07.656 [2024-07-20 15:59:42.416283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:07.656 [2024-07-20 15:59:42.416296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.297 ms 00:24:07.656 [2024-07-20 15:59:42.416306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:07.656 [2024-07-20 15:59:42.416448] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: 
*NOTICE*: [FTL][ftl] Create new FTL, UUID 5b27914f-95a9-438a-8e33-df91801c691f 00:24:07.656 [2024-07-20 15:59:42.417837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:07.656 [2024-07-20 15:59:42.417865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:24:07.656 [2024-07-20 15:59:42.417877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:24:07.656 [2024-07-20 15:59:42.417900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:07.656 [2024-07-20 15:59:42.425371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:07.656 [2024-07-20 15:59:42.425418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:07.656 [2024-07-20 15:59:42.425431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.421 ms 00:24:07.656 [2024-07-20 15:59:42.425443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:07.656 [2024-07-20 15:59:42.425492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:07.656 [2024-07-20 15:59:42.425515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:07.656 [2024-07-20 15:59:42.425526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:24:07.656 [2024-07-20 15:59:42.425538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:07.656 [2024-07-20 15:59:42.425586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:07.656 [2024-07-20 15:59:42.425599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:07.656 [2024-07-20 15:59:42.425609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:24:07.656 [2024-07-20 15:59:42.425621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:07.656 [2024-07-20 15:59:42.425645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:07.656 [2024-07-20 15:59:42.427468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:07.656 [2024-07-20 15:59:42.427506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:07.656 [2024-07-20 15:59:42.427521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.829 ms 00:24:07.656 [2024-07-20 15:59:42.427530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:07.656 [2024-07-20 15:59:42.427563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:07.656 [2024-07-20 15:59:42.427573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:07.656 [2024-07-20 15:59:42.427586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:07.656 [2024-07-20 15:59:42.427596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:07.656 [2024-07-20 15:59:42.427619] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:24:07.656 [2024-07-20 15:59:42.427768] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:07.656 [2024-07-20 15:59:42.427786] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:07.656 [2024-07-20 15:59:42.427799] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:24:07.656 [2024-07-20 15:59:42.427815] 
ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:07.656 [2024-07-20 15:59:42.427826] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:24:07.656 [2024-07-20 15:59:42.427839] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:07.656 [2024-07-20 15:59:42.427849] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:07.656 [2024-07-20 15:59:42.427870] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:07.656 [2024-07-20 15:59:42.427879] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:07.656 [2024-07-20 15:59:42.427891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:07.656 [2024-07-20 15:59:42.427900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:07.656 [2024-07-20 15:59:42.427913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.276 ms 00:24:07.657 [2024-07-20 15:59:42.427922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:07.657 [2024-07-20 15:59:42.427994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:07.657 [2024-07-20 15:59:42.428004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:07.657 [2024-07-20 15:59:42.428019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:24:07.657 [2024-07-20 15:59:42.428028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:07.657 [2024-07-20 15:59:42.428129] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:07.657 [2024-07-20 15:59:42.428141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:07.657 [2024-07-20 15:59:42.428155] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:07.657 [2024-07-20 15:59:42.428165] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:07.657 [2024-07-20 15:59:42.428177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:07.657 [2024-07-20 15:59:42.428186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:07.657 [2024-07-20 15:59:42.428198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:07.657 [2024-07-20 15:59:42.428206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:07.657 [2024-07-20 15:59:42.428218] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:07.657 [2024-07-20 15:59:42.428226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:07.657 [2024-07-20 15:59:42.428238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:07.657 [2024-07-20 15:59:42.428247] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:07.657 [2024-07-20 15:59:42.428258] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:07.657 [2024-07-20 15:59:42.428267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:07.657 [2024-07-20 15:59:42.428281] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:07.657 [2024-07-20 15:59:42.428289] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:07.657 [2024-07-20 15:59:42.428301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:07.657 [2024-07-20 15:59:42.428310] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:07.657 [2024-07-20 15:59:42.428321] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:07.657 [2024-07-20 15:59:42.428329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:07.657 [2024-07-20 15:59:42.428340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:07.657 [2024-07-20 15:59:42.428349] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:07.657 [2024-07-20 15:59:42.428360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:07.657 [2024-07-20 15:59:42.428386] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:07.657 [2024-07-20 15:59:42.428397] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:07.657 [2024-07-20 15:59:42.428406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:07.657 [2024-07-20 15:59:42.428417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:07.657 [2024-07-20 15:59:42.428426] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:07.657 [2024-07-20 15:59:42.428438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:07.657 [2024-07-20 15:59:42.428446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:07.657 [2024-07-20 15:59:42.428477] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:07.657 [2024-07-20 15:59:42.428486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:07.657 [2024-07-20 15:59:42.428498] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:07.657 [2024-07-20 15:59:42.428507] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:07.657 [2024-07-20 15:59:42.428519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:07.657 [2024-07-20 15:59:42.428528] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:07.657 [2024-07-20 15:59:42.428539] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:07.657 [2024-07-20 15:59:42.428548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:07.657 [2024-07-20 15:59:42.428559] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:07.657 [2024-07-20 15:59:42.428575] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:07.657 [2024-07-20 15:59:42.428587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:07.657 [2024-07-20 15:59:42.428596] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:07.657 [2024-07-20 15:59:42.428607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:07.657 [2024-07-20 15:59:42.428616] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:07.657 [2024-07-20 15:59:42.428628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:07.657 [2024-07-20 15:59:42.428638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:07.657 [2024-07-20 15:59:42.428652] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:07.657 [2024-07-20 15:59:42.428665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:07.657 [2024-07-20 15:59:42.428678] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:07.657 [2024-07-20 15:59:42.428687] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:07.657 [2024-07-20 15:59:42.428699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:07.657 [2024-07-20 15:59:42.428709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:07.657 [2024-07-20 15:59:42.428721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:07.657 [2024-07-20 15:59:42.428735] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:07.657 [2024-07-20 15:59:42.428750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.657 [2024-07-20 15:59:42.428764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:07.657 [2024-07-20 15:59:42.428776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:07.657 [2024-07-20 15:59:42.428787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:07.657 [2024-07-20 15:59:42.428799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:07.657 [2024-07-20 15:59:42.428818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:07.657 [2024-07-20 15:59:42.428832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:07.657 [2024-07-20 15:59:42.428841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:07.657 [2024-07-20 15:59:42.428857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:07.657 [2024-07-20 15:59:42.428867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:07.657 [2024-07-20 15:59:42.428880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:07.657 [2024-07-20 15:59:42.428890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:07.657 [2024-07-20 15:59:42.428904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:07.657 [2024-07-20 15:59:42.428914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:24:07.657 [2024-07-20 15:59:42.428927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:07.657 [2024-07-20 15:59:42.428937] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:07.657 [2024-07-20 15:59:42.428951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.657 [2024-07-20 15:59:42.428961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:07.657 [2024-07-20 15:59:42.428974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:07.657 [2024-07-20 15:59:42.428985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:07.657 [2024-07-20 15:59:42.428997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:07.657 [2024-07-20 15:59:42.429009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:07.657 [2024-07-20 15:59:42.429023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:07.657 [2024-07-20 15:59:42.429033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.934 ms 00:24:07.657 [2024-07-20 15:59:42.429048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:07.657 [2024-07-20 15:59:42.429100] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:24:07.657 [2024-07-20 15:59:42.429115] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:24:10.942 [2024-07-20 15:59:45.721017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:10.942 [2024-07-20 15:59:45.721096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:24:10.942 [2024-07-20 15:59:45.721112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3297.261 ms 00:24:10.942 [2024-07-20 15:59:45.721140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:10.942 [2024-07-20 15:59:45.732041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:10.942 [2024-07-20 15:59:45.732092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:10.942 [2024-07-20 15:59:45.732108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.819 ms 00:24:10.942 [2024-07-20 15:59:45.732136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:10.942 [2024-07-20 15:59:45.732185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:10.942 [2024-07-20 15:59:45.732201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:10.942 [2024-07-20 15:59:45.732212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:24:10.942 [2024-07-20 15:59:45.732224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.742666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.742711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:11.201 [2024-07-20 15:59:45.742725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.404 ms 00:24:11.201 [2024-07-20 15:59:45.742738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.742775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.742788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:11.201 [2024-07-20 15:59:45.742800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:11.201 [2024-07-20 15:59:45.742812] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.743268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.743294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:11.201 [2024-07-20 15:59:45.743305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.405 ms 00:24:11.201 [2024-07-20 15:59:45.743318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.743389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.743413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:11.201 [2024-07-20 15:59:45.743429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:24:11.201 [2024-07-20 15:59:45.743454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.750536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.750577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:11.201 [2024-07-20 15:59:45.750591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.066 ms 00:24:11.201 [2024-07-20 15:59:45.750603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.758277] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:11.201 [2024-07-20 15:59:45.759297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.759323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:11.201 [2024-07-20 15:59:45.759338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.636 ms 00:24:11.201 [2024-07-20 15:59:45.759348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.786615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.786661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:24:11.201 [2024-07-20 15:59:45.786684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.253 ms 00:24:11.201 [2024-07-20 15:59:45.786696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.786801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.786816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:11.201 [2024-07-20 15:59:45.786841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:24:11.201 [2024-07-20 15:59:45.786853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.790015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.790051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:24:11.201 [2024-07-20 15:59:45.790067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.138 ms 00:24:11.201 [2024-07-20 15:59:45.790080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.792939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.792972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 
00:24:11.201 [2024-07-20 15:59:45.792986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.822 ms 00:24:11.201 [2024-07-20 15:59:45.792996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.793256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.793270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:11.201 [2024-07-20 15:59:45.793284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.225 ms 00:24:11.201 [2024-07-20 15:59:45.793302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.832575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.832747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:24:11.201 [2024-07-20 15:59:45.832838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.287 ms 00:24:11.201 [2024-07-20 15:59:45.832879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.837108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.837246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:24:11.201 [2024-07-20 15:59:45.837272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.170 ms 00:24:11.201 [2024-07-20 15:59:45.837283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.840334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.840379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:24:11.201 [2024-07-20 15:59:45.840395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.004 ms 00:24:11.201 [2024-07-20 15:59:45.840404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.843944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.843978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:24:11.201 [2024-07-20 15:59:45.843993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.505 ms 00:24:11.201 [2024-07-20 15:59:45.844003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.844048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.844060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:11.201 [2024-07-20 15:59:45.844074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:24:11.201 [2024-07-20 15:59:45.844084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.844146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:11.201 [2024-07-20 15:59:45.844163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:11.201 [2024-07-20 15:59:45.844176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:24:11.201 [2024-07-20 15:59:45.844189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:11.201 [2024-07-20 15:59:45.845151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3434.597 ms, result 0 00:24:11.201 { 00:24:11.201 "name": 
"ftl", 00:24:11.201 "uuid": "5b27914f-95a9-438a-8e33-df91801c691f" 00:24:11.201 } 00:24:11.201 15:59:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:24:11.460 [2024-07-20 15:59:46.041420] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.460 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:24:11.460 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:24:11.719 [2024-07-20 15:59:46.393083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:11.719 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:24:11.977 [2024-07-20 15:59:46.573206] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:11.977 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:12.236 Fill FTL, iteration 1 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=93641 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 93641 /var/tmp/spdk.tgt.sock 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@827 -- # '[' -z 93641 ']' 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:24:12.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:12.236 15:59:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:12.236 [2024-07-20 15:59:46.996103] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:12.236 [2024-07-20 15:59:46.996259] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93641 ] 00:24:12.495 [2024-07-20 15:59:47.147047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.495 [2024-07-20 15:59:47.189974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.061 15:59:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:13.061 15:59:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:13.061 15:59:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:24:13.319 ftln1 00:24:13.319 15:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:24:13.319 15:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:24:13.579 15:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:24:13.579 15:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 93641 00:24:13.579 15:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 93641 ']' 00:24:13.579 15:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 93641 00:24:13.579 15:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:24:13.579 15:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:13.579 15:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93641 00:24:13.579 15:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:13.579 15:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:13.579 killing process with pid 93641 00:24:13.579 15:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93641' 00:24:13.579 15:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 93641 00:24:13.579 15:59:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 93641 00:24:13.837 15:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:24:13.837 15:59:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:24:14.096 [2024-07-20 15:59:48.700253] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:14.096 [2024-07-20 15:59:48.700398] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93677 ] 00:24:14.096 [2024-07-20 15:59:48.851834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.355 [2024-07-20 15:59:48.896382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.558  Copying: 252/1024 [MB] (252 MBps) Copying: 507/1024 [MB] (255 MBps) Copying: 762/1024 [MB] (255 MBps) Copying: 1020/1024 [MB] (258 MBps) Copying: 1024/1024 [MB] (average 254 MBps) 00:24:18.558 00:24:18.817 Calculate MD5 checksum, iteration 1 00:24:18.817 15:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:24:18.817 15:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:24:18.817 15:59:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:18.817 15:59:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:18.817 15:59:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:18.817 15:59:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:18.817 15:59:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:18.817 15:59:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:18.817 [2024-07-20 15:59:53.432972] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:18.817 [2024-07-20 15:59:53.433316] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93730 ] 00:24:18.817 [2024-07-20 15:59:53.584574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.075 [2024-07-20 15:59:53.626872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.897  Copying: 728/1024 [MB] (728 MBps) Copying: 1024/1024 [MB] (average 713 MBps) 00:24:20.897 00:24:20.897 15:59:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:24:20.897 15:59:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:22.797 15:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:24:22.797 Fill FTL, iteration 2 00:24:22.797 15:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=cab7ee4c8b411329e3ccfea62b7e5f5b 00:24:22.797 15:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:24:22.797 15:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:22.797 15:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:24:22.797 15:59:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:22.797 15:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:22.797 15:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:22.797 15:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:22.797 15:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:22.797 15:59:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:22.797 [2024-07-20 15:59:57.252113] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:22.797 [2024-07-20 15:59:57.252240] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93769 ] 00:24:22.797 [2024-07-20 15:59:57.400968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.797 [2024-07-20 15:59:57.443381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.236  Copying: 260/1024 [MB] (260 MBps) Copying: 521/1024 [MB] (261 MBps) Copying: 777/1024 [MB] (256 MBps) Copying: 1024/1024 [MB] (average 258 MBps) 00:24:27.236 00:24:27.236 Calculate MD5 checksum, iteration 2 00:24:27.236 16:00:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:24:27.236 16:00:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:24:27.236 16:00:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:27.236 16:00:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:27.236 16:00:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:27.236 16:00:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:27.236 16:00:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:27.236 16:00:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:27.236 [2024-07-20 16:00:01.921050] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:27.236 [2024-07-20 16:00:01.921192] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93822 ] 00:24:27.499 [2024-07-20 16:00:02.070246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.499 [2024-07-20 16:00:02.112756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.005  Copying: 730/1024 [MB] (730 MBps) Copying: 1024/1024 [MB] (average 710 MBps) 00:24:30.006 00:24:30.006 16:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:24:30.006 16:00:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:31.907 16:00:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:24:31.907 16:00:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=437d6c33f93793a16c57d5dec096f39e 00:24:31.907 16:00:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:24:31.907 16:00:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:31.907 16:00:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:31.907 [2024-07-20 16:00:06.539248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:31.907 [2024-07-20 16:00:06.539309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:31.907 [2024-07-20 16:00:06.539324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:24:31.907 [2024-07-20 16:00:06.539339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:31.907 [2024-07-20 16:00:06.539377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:31.907 [2024-07-20 16:00:06.539394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:31.907 [2024-07-20 16:00:06.539405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:31.907 [2024-07-20 16:00:06.539423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:31.907 [2024-07-20 16:00:06.539444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:31.907 [2024-07-20 16:00:06.539455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:31.907 [2024-07-20 16:00:06.539465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:31.907 [2024-07-20 16:00:06.539475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:31.907 [2024-07-20 16:00:06.539538] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.290 ms, result 0 00:24:31.907 true 00:24:31.907 16:00:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:32.170 { 00:24:32.170 "name": "ftl", 00:24:32.170 "properties": [ 00:24:32.170 { 00:24:32.170 "name": "superblock_version", 00:24:32.170 "value": 5, 00:24:32.170 "read-only": true 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "name": "base_device", 00:24:32.170 "bands": [ 00:24:32.170 { 00:24:32.170 "id": 0, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 1, 00:24:32.170 "state": 
"FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 2, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 3, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 4, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 5, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 6, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 7, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 8, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 9, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 10, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 11, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 12, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 13, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 14, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 15, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 16, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 }, 00:24:32.170 { 00:24:32.170 "id": 17, 00:24:32.170 "state": "FREE", 00:24:32.170 "validity": 0.0 00:24:32.170 } 00:24:32.170 ], 00:24:32.171 "read-only": true 00:24:32.171 }, 00:24:32.171 { 00:24:32.171 "name": "cache_device", 00:24:32.171 "type": "bdev", 00:24:32.171 "chunks": [ 00:24:32.171 { 00:24:32.171 "id": 0, 00:24:32.171 "state": "INACTIVE", 00:24:32.171 "utilization": 0.0 00:24:32.171 }, 00:24:32.171 { 00:24:32.171 "id": 1, 00:24:32.171 "state": "CLOSED", 00:24:32.171 "utilization": 1.0 00:24:32.171 }, 00:24:32.171 { 00:24:32.171 "id": 2, 00:24:32.171 "state": "CLOSED", 00:24:32.171 "utilization": 1.0 00:24:32.171 }, 00:24:32.171 { 00:24:32.171 "id": 3, 00:24:32.171 "state": "OPEN", 00:24:32.171 "utilization": 0.001953125 00:24:32.171 }, 00:24:32.171 { 00:24:32.171 "id": 4, 00:24:32.171 "state": "OPEN", 00:24:32.171 "utilization": 0.0 00:24:32.171 } 00:24:32.171 ], 00:24:32.171 "read-only": true 00:24:32.171 }, 00:24:32.171 { 00:24:32.171 "name": "verbose_mode", 00:24:32.171 "value": true, 00:24:32.171 "unit": "", 00:24:32.171 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:24:32.171 }, 00:24:32.171 { 00:24:32.171 "name": "prep_upgrade_on_shutdown", 00:24:32.171 "value": false, 00:24:32.171 "unit": "", 00:24:32.171 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:24:32.171 } 00:24:32.171 ] 00:24:32.171 } 00:24:32.171 16:00:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:24:32.171 [2024-07-20 16:00:06.918412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.171 [2024-07-20 16:00:06.918460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Decode property 00:24:32.171 [2024-07-20 16:00:06.918476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:24:32.171 [2024-07-20 16:00:06.918487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.171 [2024-07-20 16:00:06.918513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.171 [2024-07-20 16:00:06.918524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:32.171 [2024-07-20 16:00:06.918533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:32.171 [2024-07-20 16:00:06.918543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.171 [2024-07-20 16:00:06.918563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.171 [2024-07-20 16:00:06.918573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:32.171 [2024-07-20 16:00:06.918583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:32.171 [2024-07-20 16:00:06.918593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.171 [2024-07-20 16:00:06.918653] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.257 ms, result 0 00:24:32.171 true 00:24:32.171 16:00:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:24:32.171 16:00:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:24:32.171 16:00:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:32.433 16:00:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:24:32.433 16:00:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:24:32.433 16:00:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:32.705 [2024-07-20 16:00:07.274325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.705 [2024-07-20 16:00:07.274376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:32.705 [2024-07-20 16:00:07.274401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:24:32.705 [2024-07-20 16:00:07.274411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.705 [2024-07-20 16:00:07.274437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.705 [2024-07-20 16:00:07.274448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:32.705 [2024-07-20 16:00:07.274459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:32.705 [2024-07-20 16:00:07.274468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.705 [2024-07-20 16:00:07.274487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.705 [2024-07-20 16:00:07.274497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:32.705 [2024-07-20 16:00:07.274508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:32.705 [2024-07-20 16:00:07.274517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.705 [2024-07-20 16:00:07.274576] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.240 ms, result 0 00:24:32.705 true 00:24:32.705 16:00:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:32.705 { 00:24:32.705 "name": "ftl", 00:24:32.705 "properties": [ 00:24:32.705 { 00:24:32.705 "name": "superblock_version", 00:24:32.705 "value": 5, 00:24:32.705 "read-only": true 00:24:32.705 }, 00:24:32.705 { 00:24:32.705 "name": "base_device", 00:24:32.705 "bands": [ 00:24:32.705 { 00:24:32.705 "id": 0, 00:24:32.705 "state": "FREE", 00:24:32.705 "validity": 0.0 00:24:32.705 }, 00:24:32.705 { 00:24:32.705 "id": 1, 00:24:32.705 "state": "FREE", 00:24:32.705 "validity": 0.0 00:24:32.705 }, 00:24:32.705 { 00:24:32.705 "id": 2, 00:24:32.705 "state": "FREE", 00:24:32.705 "validity": 0.0 00:24:32.705 }, 00:24:32.705 { 00:24:32.705 "id": 3, 00:24:32.705 "state": "FREE", 00:24:32.705 "validity": 0.0 00:24:32.705 }, 00:24:32.705 { 00:24:32.706 "id": 4, 00:24:32.706 "state": "FREE", 00:24:32.706 "validity": 0.0 00:24:32.706 }, 00:24:32.706 { 00:24:32.706 "id": 5, 00:24:32.706 "state": "FREE", 00:24:32.706 "validity": 0.0 00:24:32.706 }, 00:24:32.706 { 00:24:32.706 "id": 6, 00:24:32.706 "state": "FREE", 00:24:32.706 "validity": 0.0 00:24:32.706 }, 00:24:32.706 { 00:24:32.706 "id": 7, 00:24:32.706 "state": "FREE", 00:24:32.706 "validity": 0.0 00:24:32.706 }, 00:24:32.706 { 00:24:32.706 "id": 8, 00:24:32.706 "state": "FREE", 00:24:32.706 "validity": 0.0 00:24:32.706 }, 00:24:32.706 { 00:24:32.706 "id": 9, 00:24:32.706 "state": "FREE", 00:24:32.706 "validity": 0.0 00:24:32.706 }, 00:24:32.706 { 00:24:32.706 "id": 10, 00:24:32.706 "state": "FREE", 00:24:32.706 "validity": 0.0 00:24:32.706 }, 00:24:32.706 { 00:24:32.706 "id": 11, 00:24:32.706 "state": "FREE", 00:24:32.706 "validity": 0.0 00:24:32.706 }, 00:24:32.706 { 00:24:32.706 "id": 12, 00:24:32.706 "state": "FREE", 00:24:32.706 "validity": 0.0 00:24:32.706 }, 00:24:32.706 { 00:24:32.706 "id": 13, 00:24:32.706 "state": "FREE", 00:24:32.706 "validity": 0.0 00:24:32.706 }, 00:24:32.706 { 00:24:32.707 "id": 14, 00:24:32.707 "state": "FREE", 00:24:32.707 "validity": 0.0 00:24:32.707 }, 00:24:32.707 { 00:24:32.707 "id": 15, 00:24:32.707 "state": "FREE", 00:24:32.707 "validity": 0.0 00:24:32.707 }, 00:24:32.707 { 00:24:32.707 "id": 16, 00:24:32.707 "state": "FREE", 00:24:32.707 "validity": 0.0 00:24:32.707 }, 00:24:32.707 { 00:24:32.707 "id": 17, 00:24:32.707 "state": "FREE", 00:24:32.707 "validity": 0.0 00:24:32.707 } 00:24:32.707 ], 00:24:32.707 "read-only": true 00:24:32.707 }, 00:24:32.707 { 00:24:32.707 "name": "cache_device", 00:24:32.707 "type": "bdev", 00:24:32.707 "chunks": [ 00:24:32.707 { 00:24:32.707 "id": 0, 00:24:32.707 "state": "INACTIVE", 00:24:32.707 "utilization": 0.0 00:24:32.707 }, 00:24:32.707 { 00:24:32.707 "id": 1, 00:24:32.707 "state": "CLOSED", 00:24:32.707 "utilization": 1.0 00:24:32.707 }, 00:24:32.707 { 00:24:32.707 "id": 2, 00:24:32.707 "state": "CLOSED", 00:24:32.707 "utilization": 1.0 00:24:32.707 }, 00:24:32.707 { 00:24:32.707 "id": 3, 00:24:32.707 "state": "OPEN", 00:24:32.707 "utilization": 0.001953125 00:24:32.707 }, 00:24:32.707 { 00:24:32.707 "id": 4, 00:24:32.707 "state": "OPEN", 00:24:32.707 "utilization": 0.0 00:24:32.708 } 00:24:32.708 ], 00:24:32.708 "read-only": true 00:24:32.708 }, 00:24:32.708 { 00:24:32.708 "name": "verbose_mode", 00:24:32.708 "value": true, 00:24:32.708 "unit": "", 00:24:32.708 "desc": "In 
verbose mode, user is able to get access to additional advanced FTL properties" 00:24:32.708 }, 00:24:32.708 { 00:24:32.708 "name": "prep_upgrade_on_shutdown", 00:24:32.708 "value": true, 00:24:32.708 "unit": "", 00:24:32.708 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:24:32.708 } 00:24:32.708 ] 00:24:32.708 } 00:24:32.971 16:00:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:24:32.971 16:00:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93530 ]] 00:24:32.971 16:00:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93530 00:24:32.971 16:00:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 93530 ']' 00:24:32.971 16:00:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 93530 00:24:32.971 16:00:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:24:32.971 16:00:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:32.971 16:00:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93530 00:24:32.971 16:00:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:32.971 killing process with pid 93530 00:24:32.971 16:00:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:32.971 16:00:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93530' 00:24:32.971 16:00:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 93530 00:24:32.971 16:00:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 93530 00:24:32.971 [2024-07-20 16:00:07.664829] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:24:32.971 [2024-07-20 16:00:07.669788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.971 [2024-07-20 16:00:07.669828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:24:32.971 [2024-07-20 16:00:07.669850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:32.971 [2024-07-20 16:00:07.669862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:32.971 [2024-07-20 16:00:07.669886] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:24:32.971 [2024-07-20 16:00:07.670553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:32.971 [2024-07-20 16:00:07.670573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:24:32.971 [2024-07-20 16:00:07.670584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.653 ms 00:24:32.971 [2024-07-20 16:00:07.670594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.077 [2024-07-20 16:00:14.927648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.077 [2024-07-20 16:00:14.927700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:24:41.077 [2024-07-20 16:00:14.927718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7268.810 ms 00:24:41.077 [2024-07-20 16:00:14.927741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.077 [2024-07-20 16:00:14.928845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.077 [2024-07-20 16:00:14.928875] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:24:41.077 [2024-07-20 16:00:14.928887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.071 ms 00:24:41.077 [2024-07-20 16:00:14.928897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.077 [2024-07-20 16:00:14.929836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.077 [2024-07-20 16:00:14.929857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:24:41.077 [2024-07-20 16:00:14.929869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.911 ms 00:24:41.077 [2024-07-20 16:00:14.929879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.077 [2024-07-20 16:00:14.931935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.077 [2024-07-20 16:00:14.931971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:24:41.077 [2024-07-20 16:00:14.931983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.009 ms 00:24:41.077 [2024-07-20 16:00:14.931993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.077 [2024-07-20 16:00:14.934874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.077 [2024-07-20 16:00:14.934914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:24:41.077 [2024-07-20 16:00:14.934927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.857 ms 00:24:41.077 [2024-07-20 16:00:14.934937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.077 [2024-07-20 16:00:14.934999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.077 [2024-07-20 16:00:14.935012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:24:41.077 [2024-07-20 16:00:14.935022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:24:41.077 [2024-07-20 16:00:14.935037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.077 [2024-07-20 16:00:14.936465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.077 [2024-07-20 16:00:14.936498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:24:41.077 [2024-07-20 16:00:14.936509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.414 ms 00:24:41.077 [2024-07-20 16:00:14.936519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.077 [2024-07-20 16:00:14.938492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.077 [2024-07-20 16:00:14.938524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:24:41.077 [2024-07-20 16:00:14.938535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.947 ms 00:24:41.077 [2024-07-20 16:00:14.938544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.077 [2024-07-20 16:00:14.940094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.077 [2024-07-20 16:00:14.940128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:24:41.077 [2024-07-20 16:00:14.940139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.523 ms 00:24:41.077 [2024-07-20 16:00:14.940149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.077 [2024-07-20 16:00:14.941328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.077 
[2024-07-20 16:00:14.941377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:24:41.077 [2024-07-20 16:00:14.941388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.129 ms 00:24:41.077 [2024-07-20 16:00:14.941397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.077 [2024-07-20 16:00:14.941425] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:24:41.077 [2024-07-20 16:00:14.941441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:41.077 [2024-07-20 16:00:14.941454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:24:41.077 [2024-07-20 16:00:14.941464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:24:41.077 [2024-07-20 16:00:14.941475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:41.077 [2024-07-20 16:00:14.941630] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:24:41.077 [2024-07-20 16:00:14.941639] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5b27914f-95a9-438a-8e33-df91801c691f 00:24:41.077 [2024-07-20 16:00:14.941650] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:24:41.077 [2024-07-20 16:00:14.941659] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:24:41.078 [2024-07-20 16:00:14.941669] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:24:41.078 [2024-07-20 16:00:14.941679] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:24:41.078 [2024-07-20 16:00:14.941688] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:24:41.078 [2024-07-20 16:00:14.941708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:24:41.078 [2024-07-20 16:00:14.941721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:24:41.078 [2024-07-20 16:00:14.941730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:24:41.078 [2024-07-20 16:00:14.941739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:24:41.078 [2024-07-20 16:00:14.941749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.078 [2024-07-20 16:00:14.941759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:24:41.078 [2024-07-20 16:00:14.941770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.325 ms 00:24:41.078 [2024-07-20 16:00:14.941779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.943534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.078 [2024-07-20 16:00:14.943557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:24:41.078 [2024-07-20 16:00:14.943569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.742 ms 00:24:41.078 [2024-07-20 16:00:14.943583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.943695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.078 [2024-07-20 16:00:14.943705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:24:41.078 [2024-07-20 16:00:14.943716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.093 ms 00:24:41.078 [2024-07-20 16:00:14.943725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.950613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:41.078 [2024-07-20 16:00:14.950646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:41.078 [2024-07-20 16:00:14.950659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:41.078 [2024-07-20 16:00:14.950674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.950701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:41.078 [2024-07-20 16:00:14.950712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:41.078 [2024-07-20 16:00:14.950721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:41.078 [2024-07-20 16:00:14.950731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.950782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:41.078 [2024-07-20 16:00:14.950794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:41.078 [2024-07-20 16:00:14.950804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:41.078 [2024-07-20 16:00:14.950822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.950843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:41.078 [2024-07-20 
16:00:14.950853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:41.078 [2024-07-20 16:00:14.950863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:41.078 [2024-07-20 16:00:14.950872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.962553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:41.078 [2024-07-20 16:00:14.962598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:41.078 [2024-07-20 16:00:14.962611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:41.078 [2024-07-20 16:00:14.962639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.970816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:41.078 [2024-07-20 16:00:14.970856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:41.078 [2024-07-20 16:00:14.970868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:41.078 [2024-07-20 16:00:14.970879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.970943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:41.078 [2024-07-20 16:00:14.970954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:41.078 [2024-07-20 16:00:14.970976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:41.078 [2024-07-20 16:00:14.970986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.971020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:41.078 [2024-07-20 16:00:14.971040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:41.078 [2024-07-20 16:00:14.971051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:41.078 [2024-07-20 16:00:14.971071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.971150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:41.078 [2024-07-20 16:00:14.971163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:41.078 [2024-07-20 16:00:14.971173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:41.078 [2024-07-20 16:00:14.971183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.971216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:41.078 [2024-07-20 16:00:14.971231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:24:41.078 [2024-07-20 16:00:14.971241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:41.078 [2024-07-20 16:00:14.971251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.971294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:41.078 [2024-07-20 16:00:14.971304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:41.078 [2024-07-20 16:00:14.971315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:41.078 [2024-07-20 16:00:14.971324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.971381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:24:41.078 [2024-07-20 16:00:14.971403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:41.078 [2024-07-20 16:00:14.971413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:41.078 [2024-07-20 16:00:14.971423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.078 [2024-07-20 16:00:14.971541] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7313.581 ms, result 0 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93990 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93990 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 93990 ']' 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:42.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:42.453 16:00:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:42.453 [2024-07-20 16:00:17.207504] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
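At this point the 'FTL shutdown' management process has finished cleanly (result 0 after 7313.581 ms), and the test restarts the target: spdk_tgt is launched again with the saved --config=.../tgt.json so the same bdev stack comes back, and waitforlisten blocks until the new process (pid 93990) answers on /var/tmp/spdk.sock. A minimal sketch of that kind of wait loop, assuming only scripts/rpc.py and its rpc_get_methods call; this is an illustration of the idea, not the autotest helper itself:

    # Poll the RPC socket until the freshly started target responds.
    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            # rpc_get_methods only succeeds once spdk_tgt is listening
            if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        echo "target never came up on $sock" >&2
        return 1
    }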
00:24:42.453 [2024-07-20 16:00:17.207647] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93990 ] 00:24:42.712 [2024-07-20 16:00:17.356796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.712 [2024-07-20 16:00:17.397929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.971 [2024-07-20 16:00:17.693994] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:42.971 [2024-07-20 16:00:17.694063] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:43.241 [2024-07-20 16:00:17.830873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:43.241 [2024-07-20 16:00:17.830919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:43.241 [2024-07-20 16:00:17.830935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:43.241 [2024-07-20 16:00:17.830945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:43.241 [2024-07-20 16:00:17.831015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:43.241 [2024-07-20 16:00:17.831027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:43.241 [2024-07-20 16:00:17.831038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:24:43.241 [2024-07-20 16:00:17.831053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:43.241 [2024-07-20 16:00:17.831075] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:43.241 [2024-07-20 16:00:17.831418] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:43.241 [2024-07-20 16:00:17.831439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:43.241 [2024-07-20 16:00:17.831450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:43.241 [2024-07-20 16:00:17.831460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.368 ms 00:24:43.241 [2024-07-20 16:00:17.831470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:43.241 [2024-07-20 16:00:17.832857] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:24:43.241 [2024-07-20 16:00:17.835260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:43.241 [2024-07-20 16:00:17.835304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:24:43.241 [2024-07-20 16:00:17.835320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.408 ms 00:24:43.241 [2024-07-20 16:00:17.835330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:43.241 [2024-07-20 16:00:17.835398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:43.241 [2024-07-20 16:00:17.835411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:24:43.241 [2024-07-20 16:00:17.835430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:24:43.241 [2024-07-20 16:00:17.835439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:43.241 [2024-07-20 16:00:17.842077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:43.241 [2024-07-20 16:00:17.842103] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:43.241 [2024-07-20 16:00:17.842114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.570 ms 00:24:43.241 [2024-07-20 16:00:17.842136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:43.241 [2024-07-20 16:00:17.842183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:43.241 [2024-07-20 16:00:17.842201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:43.241 [2024-07-20 16:00:17.842212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:24:43.241 [2024-07-20 16:00:17.842233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:43.241 [2024-07-20 16:00:17.842285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:43.241 [2024-07-20 16:00:17.842297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:43.241 [2024-07-20 16:00:17.842307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:24:43.241 [2024-07-20 16:00:17.842324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:43.241 [2024-07-20 16:00:17.842352] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:43.241 [2024-07-20 16:00:17.843948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:43.241 [2024-07-20 16:00:17.843972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:43.241 [2024-07-20 16:00:17.843983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.605 ms 00:24:43.241 [2024-07-20 16:00:17.843996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:43.242 [2024-07-20 16:00:17.844029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:43.242 [2024-07-20 16:00:17.844048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:43.242 [2024-07-20 16:00:17.844058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:43.242 [2024-07-20 16:00:17.844068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:43.242 [2024-07-20 16:00:17.844107] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:24:43.242 [2024-07-20 16:00:17.844131] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:24:43.242 [2024-07-20 16:00:17.844165] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:24:43.242 [2024-07-20 16:00:17.844185] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:24:43.242 [2024-07-20 16:00:17.844268] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:43.242 [2024-07-20 16:00:17.844280] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:43.242 [2024-07-20 16:00:17.844294] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:24:43.242 [2024-07-20 16:00:17.844307] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:43.242 [2024-07-20 16:00:17.844319] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 
00:24:43.242 [2024-07-20 16:00:17.844330] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:43.242 [2024-07-20 16:00:17.844340] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:43.242 [2024-07-20 16:00:17.844363] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:43.242 [2024-07-20 16:00:17.844375] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:43.242 [2024-07-20 16:00:17.844388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:43.242 [2024-07-20 16:00:17.844406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:43.242 [2024-07-20 16:00:17.844416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.284 ms 00:24:43.242 [2024-07-20 16:00:17.844426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:43.242 [2024-07-20 16:00:17.844497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:43.242 [2024-07-20 16:00:17.844508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:43.242 [2024-07-20 16:00:17.844517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:24:43.242 [2024-07-20 16:00:17.844528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:43.242 [2024-07-20 16:00:17.844616] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:43.242 [2024-07-20 16:00:17.844638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:43.242 [2024-07-20 16:00:17.844648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:43.242 [2024-07-20 16:00:17.844667] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:43.242 [2024-07-20 16:00:17.844677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:43.242 [2024-07-20 16:00:17.844687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:43.242 [2024-07-20 16:00:17.844700] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:43.242 [2024-07-20 16:00:17.844709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:43.242 [2024-07-20 16:00:17.844719] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:43.242 [2024-07-20 16:00:17.844728] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:43.242 [2024-07-20 16:00:17.844737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:43.242 [2024-07-20 16:00:17.844747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:43.242 [2024-07-20 16:00:17.844756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:43.242 [2024-07-20 16:00:17.844765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:43.243 [2024-07-20 16:00:17.844775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:43.243 [2024-07-20 16:00:17.844784] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:43.243 [2024-07-20 16:00:17.844793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:43.243 [2024-07-20 16:00:17.844802] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:43.243 [2024-07-20 16:00:17.844811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:43.243 [2024-07-20 16:00:17.844821] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl] Region p2l0 00:24:43.243 [2024-07-20 16:00:17.844830] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:43.243 [2024-07-20 16:00:17.844838] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:43.243 [2024-07-20 16:00:17.844850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:43.243 [2024-07-20 16:00:17.844863] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:43.243 [2024-07-20 16:00:17.844873] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:43.243 [2024-07-20 16:00:17.844882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:43.243 [2024-07-20 16:00:17.844891] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:43.243 [2024-07-20 16:00:17.844900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:43.243 [2024-07-20 16:00:17.844909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:43.243 [2024-07-20 16:00:17.844918] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:43.243 [2024-07-20 16:00:17.844927] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:43.243 [2024-07-20 16:00:17.844936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:43.243 [2024-07-20 16:00:17.844946] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:43.243 [2024-07-20 16:00:17.844955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:43.243 [2024-07-20 16:00:17.844964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:43.243 [2024-07-20 16:00:17.844973] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:43.243 [2024-07-20 16:00:17.844982] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:43.243 [2024-07-20 16:00:17.844991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:43.243 [2024-07-20 16:00:17.845003] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:43.243 [2024-07-20 16:00:17.845012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:43.243 [2024-07-20 16:00:17.845021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:43.243 [2024-07-20 16:00:17.845030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:43.243 [2024-07-20 16:00:17.845039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:43.243 [2024-07-20 16:00:17.845048] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:43.243 [2024-07-20 16:00:17.845058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:43.243 [2024-07-20 16:00:17.845067] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:43.243 [2024-07-20 16:00:17.845078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:43.243 [2024-07-20 16:00:17.845088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:43.243 [2024-07-20 16:00:17.845097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:43.243 [2024-07-20 16:00:17.845107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:43.243 [2024-07-20 16:00:17.845116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:43.243 [2024-07-20 16:00:17.845125] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:43.243 [2024-07-20 16:00:17.845134] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:43.243 [2024-07-20 16:00:17.845145] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:43.243 [2024-07-20 16:00:17.845162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.243 [2024-07-20 16:00:17.845177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:43.244 [2024-07-20 16:00:17.845188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:43.244 [2024-07-20 16:00:17.845198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:43.244 [2024-07-20 16:00:17.845209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:43.244 [2024-07-20 16:00:17.845219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:43.244 [2024-07-20 16:00:17.845230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:43.244 [2024-07-20 16:00:17.845240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:43.244 [2024-07-20 16:00:17.845250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:43.244 [2024-07-20 16:00:17.845261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:43.244 [2024-07-20 16:00:17.845271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:43.244 [2024-07-20 16:00:17.845282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:43.244 [2024-07-20 16:00:17.845292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:43.244 [2024-07-20 16:00:17.845303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:24:43.244 [2024-07-20 16:00:17.845313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:43.244 [2024-07-20 16:00:17.845323] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:43.244 [2024-07-20 16:00:17.845340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.244 [2024-07-20 16:00:17.845351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:43.244 [2024-07-20 16:00:17.845372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x480000 00:24:43.244 [2024-07-20 16:00:17.845382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:43.244 [2024-07-20 16:00:17.845392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:43.244 [2024-07-20 16:00:17.845403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:43.244 [2024-07-20 16:00:17.845413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:43.244 [2024-07-20 16:00:17.845424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.839 ms 00:24:43.244 [2024-07-20 16:00:17.845433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:43.244 [2024-07-20 16:00:17.845479] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:24:43.244 [2024-07-20 16:00:17.845491] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:24:48.511 [2024-07-20 16:00:22.778895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.778962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:24:48.511 [2024-07-20 16:00:22.778992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4941.424 ms 00:24:48.511 [2024-07-20 16:00:22.779003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.789934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.789982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:48.511 [2024-07-20 16:00:22.790015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.848 ms 00:24:48.511 [2024-07-20 16:00:22.790039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.790100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.790112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:48.511 [2024-07-20 16:00:22.790137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:24:48.511 [2024-07-20 16:00:22.790148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.800743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.800786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:48.511 [2024-07-20 16:00:22.800810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.535 ms 00:24:48.511 [2024-07-20 16:00:22.800821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.800861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.800883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:48.511 [2024-07-20 16:00:22.800894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:48.511 [2024-07-20 16:00:22.800904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.801396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.801411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize trim map 00:24:48.511 [2024-07-20 16:00:22.801422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.444 ms 00:24:48.511 [2024-07-20 16:00:22.801432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.801472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.801482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:48.511 [2024-07-20 16:00:22.801504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:24:48.511 [2024-07-20 16:00:22.801515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.808656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.808694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:48.511 [2024-07-20 16:00:22.808715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.128 ms 00:24:48.511 [2024-07-20 16:00:22.808726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.811363] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:48.511 [2024-07-20 16:00:22.811414] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:24:48.511 [2024-07-20 16:00:22.811429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.811456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:24:48.511 [2024-07-20 16:00:22.811468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.617 ms 00:24:48.511 [2024-07-20 16:00:22.811478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.814882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.814926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:24:48.511 [2024-07-20 16:00:22.814960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.355 ms 00:24:48.511 [2024-07-20 16:00:22.814971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.816413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.816447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:24:48.511 [2024-07-20 16:00:22.816459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.400 ms 00:24:48.511 [2024-07-20 16:00:22.816469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.817849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.817883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:24:48.511 [2024-07-20 16:00:22.817895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.344 ms 00:24:48.511 [2024-07-20 16:00:22.817905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.818203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.818222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:48.511 [2024-07-20 16:00:22.818234] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.227 ms 00:24:48.511 [2024-07-20 16:00:22.818255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.856077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.856154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:24:48.511 [2024-07-20 16:00:22.856196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.858 ms 00:24:48.511 [2024-07-20 16:00:22.856213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.864605] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:48.511 [2024-07-20 16:00:22.865383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.865404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:48.511 [2024-07-20 16:00:22.865417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.102 ms 00:24:48.511 [2024-07-20 16:00:22.865428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.865510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.865532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:24:48.511 [2024-07-20 16:00:22.865545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:24:48.511 [2024-07-20 16:00:22.865557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.865611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.865629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:48.511 [2024-07-20 16:00:22.865641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:24:48.511 [2024-07-20 16:00:22.865651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.865678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.865690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:48.511 [2024-07-20 16:00:22.865700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:24:48.511 [2024-07-20 16:00:22.865712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.865750] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:24:48.511 [2024-07-20 16:00:22.865764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.865775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:24:48.511 [2024-07-20 16:00:22.865790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:24:48.511 [2024-07-20 16:00:22.865800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.869252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.869298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:24:48.511 [2024-07-20 16:00:22.869312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.433 ms 00:24:48.511 [2024-07-20 16:00:22.869324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.869412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.511 [2024-07-20 16:00:22.869426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:48.511 [2024-07-20 16:00:22.869444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:24:48.511 [2024-07-20 16:00:22.869455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.511 [2024-07-20 16:00:22.870700] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5047.517 ms, result 0 00:24:48.511 [2024-07-20 16:00:22.885966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.511 [2024-07-20 16:00:22.901968] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:48.511 [2024-07-20 16:00:22.910073] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:48.511 16:00:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:48.511 16:00:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:48.511 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:48.511 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:24:48.511 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:48.769 [2024-07-20 16:00:23.313666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.769 [2024-07-20 16:00:23.313717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:48.769 [2024-07-20 16:00:23.313733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:24:48.769 [2024-07-20 16:00:23.313744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.769 [2024-07-20 16:00:23.313769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.769 [2024-07-20 16:00:23.313780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:48.769 [2024-07-20 16:00:23.313791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:48.769 [2024-07-20 16:00:23.313801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.769 [2024-07-20 16:00:23.313825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:48.769 [2024-07-20 16:00:23.313836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:48.769 [2024-07-20 16:00:23.313847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:48.769 [2024-07-20 16:00:23.313857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:48.769 [2024-07-20 16:00:23.313915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.249 ms, result 0 00:24:48.769 true 00:24:48.769 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:48.769 { 00:24:48.769 "name": "ftl", 00:24:48.769 "properties": [ 00:24:48.769 { 00:24:48.769 "name": "superblock_version", 00:24:48.769 "value": 5, 00:24:48.769 "read-only": true 00:24:48.769 }, 00:24:48.769 { 
00:24:48.769 "name": "base_device", 00:24:48.769 "bands": [ 00:24:48.769 { 00:24:48.769 "id": 0, 00:24:48.769 "state": "CLOSED", 00:24:48.769 "validity": 1.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 1, 00:24:48.769 "state": "CLOSED", 00:24:48.769 "validity": 1.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 2, 00:24:48.769 "state": "CLOSED", 00:24:48.769 "validity": 0.007843137254901933 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 3, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 4, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 5, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 6, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 7, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 8, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 9, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 10, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 11, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 12, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 13, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 14, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 15, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 16, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 17, 00:24:48.769 "state": "FREE", 00:24:48.769 "validity": 0.0 00:24:48.769 } 00:24:48.769 ], 00:24:48.769 "read-only": true 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "name": "cache_device", 00:24:48.769 "type": "bdev", 00:24:48.769 "chunks": [ 00:24:48.769 { 00:24:48.769 "id": 0, 00:24:48.769 "state": "INACTIVE", 00:24:48.769 "utilization": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 1, 00:24:48.769 "state": "OPEN", 00:24:48.769 "utilization": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 2, 00:24:48.769 "state": "OPEN", 00:24:48.769 "utilization": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 3, 00:24:48.769 "state": "FREE", 00:24:48.769 "utilization": 0.0 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "id": 4, 00:24:48.769 "state": "FREE", 00:24:48.769 "utilization": 0.0 00:24:48.769 } 00:24:48.769 ], 00:24:48.769 "read-only": true 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "name": "verbose_mode", 00:24:48.769 "value": true, 00:24:48.769 "unit": "", 00:24:48.769 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:24:48.769 }, 00:24:48.769 { 00:24:48.769 "name": "prep_upgrade_on_shutdown", 00:24:48.769 "value": false, 00:24:48.769 "unit": "", 00:24:48.769 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:24:48.769 } 00:24:48.769 ] 00:24:48.769 } 00:24:48.769 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | 
.chunks[] | select(.utilization != 0.0)] | length' 00:24:48.769 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:24:48.769 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:49.028 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:24:49.028 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:24:49.028 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:24:49.028 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:24:49.028 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:49.286 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:24:49.286 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:24:49.286 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:24:49.286 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:24:49.286 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:24:49.286 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:49.286 Validate MD5 checksum, iteration 1 00:24:49.286 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:24:49.286 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:49.286 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:49.286 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:49.286 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:49.286 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:49.286 16:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:49.286 [2024-07-20 16:00:23.947949] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
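The two jq probes above assert that the restarted device came back empty: zero cache chunks with non-zero utilization (used=0) and zero bands reported OPENED (opened=0). With that confirmed, test_validate_checksum starts reading the exported ftln1 bdev over NVMe/TCP in 1 GiB slices (spdk_dd with --bs=1048576 --count=1024, advancing --skip by 1024 each iteration) and fingerprints each slice with md5sum. The chunk count can be reproduced directly with the same RPC and filter seen in the trace:

    # Count cache chunks that actually hold data; a freshly scrubbed
    # device is expected to report 0.
    scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[]
               | select(.name == "cache_device")
               | .chunks[]
               | select(.utilization != 0.0)] | length'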
00:24:49.286 [2024-07-20 16:00:23.948076] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94072 ] 00:24:49.544 [2024-07-20 16:00:24.098882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.544 [2024-07-20 16:00:24.143266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.075  Copying: 737/1024 [MB] (737 MBps) Copying: 1024/1024 [MB] (average 716 MBps) 00:24:53.075 00:24:53.075 16:00:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:24:53.075 16:00:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:55.000 16:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:55.000 Validate MD5 checksum, iteration 2 00:24:55.000 16:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cab7ee4c8b411329e3ccfea62b7e5f5b 00:24:55.000 16:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cab7ee4c8b411329e3ccfea62b7e5f5b != \c\a\b\7\e\e\4\c\8\b\4\1\1\3\2\9\e\3\c\c\f\e\a\6\2\b\7\e\5\f\5\b ]] 00:24:55.000 16:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:55.000 16:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:55.000 16:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:24:55.000 16:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:55.000 16:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:55.000 16:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:55.000 16:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:55.000 16:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:55.000 16:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:55.000 [2024-07-20 16:00:29.466760] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
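Each checksum pass above has the same shape: read 1024 MiB from the FTL bdev at the current offset, hash it, and compare against the digest recorded when the data was first written. A sketch of one iteration with paths taken from this run; tcp_dd is the ftl/common.sh wrapper around the spdk_dd invocation shown in the trace, and expected_md5 is a stand-in name for the recorded digest:

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    # 1024 blocks of 1 MiB each, queue depth 2, starting $skip MiB into the device
    tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sum=$(md5sum "$file" | cut -f1 -d' ')
    [[ $sum == "$expected_md5" ]] || return 1  # data must survive the restart intact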
00:24:55.000 [2024-07-20 16:00:29.466884] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94139 ] 00:24:55.001 [2024-07-20 16:00:29.618432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.001 [2024-07-20 16:00:29.660352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.873  Copying: 732/1024 [MB] (732 MBps) Copying: 1024/1024 [MB] (average 726 MBps) 00:24:57.873 00:24:57.873 16:00:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:24:57.873 16:00:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:59.246 16:00:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=437d6c33f93793a16c57d5dec096f39e 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 437d6c33f93793a16c57d5dec096f39e != \4\3\7\d\6\c\3\3\f\9\3\7\9\3\a\1\6\c\5\7\d\5\d\e\c\0\9\6\f\3\9\e ]] 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 93990 ]] 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 93990 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94189 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94189 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 94189 ']' 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:59.246 16:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:59.504 [2024-07-20 16:00:34.100297] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:59.504 [2024-07-20 16:00:34.100436] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94189 ] 00:24:59.504 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 826: 93990 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:24:59.504 [2024-07-20 16:00:34.250221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.504 [2024-07-20 16:00:34.291447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.107 [2024-07-20 16:00:34.588991] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:00.107 [2024-07-20 16:00:34.589079] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:25:00.107 [2024-07-20 16:00:34.725621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.107 [2024-07-20 16:00:34.725665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:00.107 [2024-07-20 16:00:34.725681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:00.107 [2024-07-20 16:00:34.725691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.107 [2024-07-20 16:00:34.725753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.107 [2024-07-20 16:00:34.725767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:00.107 [2024-07-20 16:00:34.725777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:25:00.108 [2024-07-20 16:00:34.725792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.108 [2024-07-20 16:00:34.725821] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:00.108 [2024-07-20 16:00:34.726047] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:00.108 [2024-07-20 16:00:34.726072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.108 [2024-07-20 16:00:34.726082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:00.108 [2024-07-20 16:00:34.726093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:25:00.108 [2024-07-20 16:00:34.726103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.108 [2024-07-20 16:00:34.726435] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:25:00.108 [2024-07-20 16:00:34.730594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.108 [2024-07-20 16:00:34.730631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:25:00.108 [2024-07-20 16:00:34.730644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.166 ms 00:25:00.108 [2024-07-20 
16:00:34.730659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.108 [2024-07-20 16:00:34.731711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.108 [2024-07-20 16:00:34.731739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:25:00.108 [2024-07-20 16:00:34.731751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:00.108 [2024-07-20 16:00:34.731761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.108 [2024-07-20 16:00:34.732164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.108 [2024-07-20 16:00:34.732188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:00.108 [2024-07-20 16:00:34.732199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.342 ms 00:25:00.108 [2024-07-20 16:00:34.732209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.108 [2024-07-20 16:00:34.732250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.108 [2024-07-20 16:00:34.732262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:00.108 [2024-07-20 16:00:34.732272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:00.108 [2024-07-20 16:00:34.732281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.108 [2024-07-20 16:00:34.732315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.108 [2024-07-20 16:00:34.732330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:00.108 [2024-07-20 16:00:34.732340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:25:00.108 [2024-07-20 16:00:34.732349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.108 [2024-07-20 16:00:34.732383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:00.108 [2024-07-20 16:00:34.733178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.108 [2024-07-20 16:00:34.733200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:00.108 [2024-07-20 16:00:34.733215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.801 ms 00:25:00.108 [2024-07-20 16:00:34.733236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.108 [2024-07-20 16:00:34.733269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.108 [2024-07-20 16:00:34.733280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:00.108 [2024-07-20 16:00:34.733290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:00.108 [2024-07-20 16:00:34.733299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.108 [2024-07-20 16:00:34.733320] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:25:00.108 [2024-07-20 16:00:34.733341] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:25:00.108 [2024-07-20 16:00:34.733391] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:25:00.108 [2024-07-20 16:00:34.733412] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:25:00.108 [2024-07-20 16:00:34.733492] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:00.108 [2024-07-20 16:00:34.733510] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:00.108 [2024-07-20 16:00:34.733523] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:25:00.108 [2024-07-20 16:00:34.733536] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:00.108 [2024-07-20 16:00:34.733547] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:00.108 [2024-07-20 16:00:34.733558] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:00.108 [2024-07-20 16:00:34.733578] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:00.108 [2024-07-20 16:00:34.733588] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:00.108 [2024-07-20 16:00:34.733598] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:00.108 [2024-07-20 16:00:34.733611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.108 [2024-07-20 16:00:34.733627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:00.108 [2024-07-20 16:00:34.733638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.293 ms 00:25:00.108 [2024-07-20 16:00:34.733647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.108 [2024-07-20 16:00:34.733716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.108 [2024-07-20 16:00:34.733726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:00.108 [2024-07-20 16:00:34.733736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:25:00.108 [2024-07-20 16:00:34.733745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.108 [2024-07-20 16:00:34.733843] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:00.108 [2024-07-20 16:00:34.733871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:00.108 [2024-07-20 16:00:34.733882] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:00.108 [2024-07-20 16:00:34.733892] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:00.108 [2024-07-20 16:00:34.733914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:00.108 [2024-07-20 16:00:34.733923] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:00.108 [2024-07-20 16:00:34.733933] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:00.108 [2024-07-20 16:00:34.733942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:00.108 [2024-07-20 16:00:34.733951] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:00.108 [2024-07-20 16:00:34.733960] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:00.108 [2024-07-20 16:00:34.733971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:00.108 [2024-07-20 16:00:34.733980] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:00.108 [2024-07-20 16:00:34.733989] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:00.108 [2024-07-20 16:00:34.733998] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:00.108 [2024-07-20 16:00:34.734007] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:00.108 [2024-07-20 16:00:34.734016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:00.108 [2024-07-20 16:00:34.734025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:00.108 [2024-07-20 16:00:34.734034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:00.108 [2024-07-20 16:00:34.734043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:00.108 [2024-07-20 16:00:34.734053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:00.108 [2024-07-20 16:00:34.734064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:00.108 [2024-07-20 16:00:34.734074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:00.108 [2024-07-20 16:00:34.734083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:00.108 [2024-07-20 16:00:34.734092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:00.108 [2024-07-20 16:00:34.734101] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:00.108 [2024-07-20 16:00:34.734110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:00.108 [2024-07-20 16:00:34.734119] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:00.108 [2024-07-20 16:00:34.734134] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:00.108 [2024-07-20 16:00:34.734143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:00.108 [2024-07-20 16:00:34.734152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:00.108 [2024-07-20 16:00:34.734161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:00.108 [2024-07-20 16:00:34.734170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:00.108 [2024-07-20 16:00:34.734179] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:00.108 [2024-07-20 16:00:34.734188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:00.108 [2024-07-20 16:00:34.734197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:00.108 [2024-07-20 16:00:34.734206] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:00.108 [2024-07-20 16:00:34.734218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:00.108 [2024-07-20 16:00:34.734228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:00.108 [2024-07-20 16:00:34.734237] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:00.108 [2024-07-20 16:00:34.734246] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:00.108 [2024-07-20 16:00:34.734255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:00.108 [2024-07-20 16:00:34.734264] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:00.108 [2024-07-20 16:00:34.734274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:00.108 [2024-07-20 16:00:34.734283] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:00.108 [2024-07-20 16:00:34.734299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:00.108 [2024-07-20 16:00:34.734308] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:00.108 [2024-07-20 16:00:34.734318] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:00.108 [2024-07-20 16:00:34.734327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:00.108 [2024-07-20 16:00:34.734336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:00.108 [2024-07-20 16:00:34.734345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:00.108 [2024-07-20 16:00:34.734365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:00.108 [2024-07-20 16:00:34.734375] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:00.109 [2024-07-20 16:00:34.734390] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:00.109 [2024-07-20 16:00:34.734401] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:00.109 [2024-07-20 16:00:34.734413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:00.109 [2024-07-20 16:00:34.734431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:00.109 [2024-07-20 16:00:34.734442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:00.109 [2024-07-20 16:00:34.734452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:00.109 [2024-07-20 16:00:34.734463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:00.109 [2024-07-20 16:00:34.734473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:00.109 [2024-07-20 16:00:34.734484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:00.109 [2024-07-20 16:00:34.734494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:00.109 [2024-07-20 16:00:34.734504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:00.109 [2024-07-20 16:00:34.734514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:00.109 [2024-07-20 16:00:34.734524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:00.109 [2024-07-20 16:00:34.734534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:00.109 [2024-07-20 16:00:34.734544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:00.109 [2024-07-20 16:00:34.734554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:00.109 [2024-07-20 16:00:34.734568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x2fa0 blk_sz:0x13d060 00:25:00.109 [2024-07-20 16:00:34.734578] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:00.109 [2024-07-20 16:00:34.734589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:00.109 [2024-07-20 16:00:34.734603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:00.109 [2024-07-20 16:00:34.734613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:00.109 [2024-07-20 16:00:34.734623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:00.109 [2024-07-20 16:00:34.734634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:00.109 [2024-07-20 16:00:34.734645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.734662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:00.109 [2024-07-20 16:00:34.734673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.854 ms 00:25:00.109 [2024-07-20 16:00:34.734682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.743977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.744013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:00.109 [2024-07-20 16:00:34.744025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.257 ms 00:25:00.109 [2024-07-20 16:00:34.744036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.744074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.744089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:00.109 [2024-07-20 16:00:34.744100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:25:00.109 [2024-07-20 16:00:34.744110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.754757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.754793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:00.109 [2024-07-20 16:00:34.754810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.607 ms 00:25:00.109 [2024-07-20 16:00:34.754828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.754876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.754887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:00.109 [2024-07-20 16:00:34.754898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:00.109 [2024-07-20 16:00:34.754908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.755012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.755024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:00.109 [2024-07-20 
16:00:34.755034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:25:00.109 [2024-07-20 16:00:34.755045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.755081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.755104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:00.109 [2024-07-20 16:00:34.755114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:25:00.109 [2024-07-20 16:00:34.755124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.762272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.762325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:00.109 [2024-07-20 16:00:34.762340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.135 ms 00:25:00.109 [2024-07-20 16:00:34.762350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.762476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.762500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:25:00.109 [2024-07-20 16:00:34.762512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:00.109 [2024-07-20 16:00:34.762522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.779788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.779832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:25:00.109 [2024-07-20 16:00:34.779849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.272 ms 00:25:00.109 [2024-07-20 16:00:34.779862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.781245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.781287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:00.109 [2024-07-20 16:00:34.781302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.275 ms 00:25:00.109 [2024-07-20 16:00:34.781324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.802445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.802501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:25:00.109 [2024-07-20 16:00:34.802517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.098 ms 00:25:00.109 [2024-07-20 16:00:34.802528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.802700] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:25:00.109 [2024-07-20 16:00:34.802798] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:25:00.109 [2024-07-20 16:00:34.802902] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:25:00.109 [2024-07-20 16:00:34.802999] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:25:00.109 [2024-07-20 16:00:34.803015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.803032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:25:00.109 [2024-07-20 16:00:34.803044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.430 ms 00:25:00.109 [2024-07-20 16:00:34.803063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.803107] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:25:00.109 [2024-07-20 16:00:34.803130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.803140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:25:00.109 [2024-07-20 16:00:34.803151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:25:00.109 [2024-07-20 16:00:34.803161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.805785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.805820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:25:00.109 [2024-07-20 16:00:34.805837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.578 ms 00:25:00.109 [2024-07-20 16:00:34.805846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.806431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:00.109 [2024-07-20 16:00:34.806460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:25:00.109 [2024-07-20 16:00:34.806472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:25:00.109 [2024-07-20 16:00:34.806482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:00.109 [2024-07-20 16:00:34.806717] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:25:00.675 [2024-07-20 16:00:35.424037] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:25:00.675 [2024-07-20 16:00:35.424196] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:25:01.611 [2024-07-20 16:00:36.043255] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:25:01.611 [2024-07-20 16:00:36.043365] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:01.611 [2024-07-20 16:00:36.043383] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:25:01.611 [2024-07-20 16:00:36.043399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:01.611 [2024-07-20 16:00:36.043410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:25:01.611 [2024-07-20 16:00:36.043425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1238.888 ms 00:25:01.611 [2024-07-20 16:00:36.043436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:01.611 [2024-07-20 16:00:36.043471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:01.611 [2024-07-20 16:00:36.043482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:25:01.611 [2024-07-20 16:00:36.043492] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:01.611 [2024-07-20 16:00:36.043510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:01.611 [2024-07-20 16:00:36.050611] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:01.611 [2024-07-20 16:00:36.050749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:01.611 [2024-07-20 16:00:36.050762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:01.611 [2024-07-20 16:00:36.050774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.233 ms 00:25:01.611 [2024-07-20 16:00:36.050784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:01.611 [2024-07-20 16:00:36.051371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:01.611 [2024-07-20 16:00:36.051402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:25:01.611 [2024-07-20 16:00:36.051414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.509 ms 00:25:01.611 [2024-07-20 16:00:36.051424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:01.611 [2024-07-20 16:00:36.053312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:01.611 [2024-07-20 16:00:36.053337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:25:01.611 [2024-07-20 16:00:36.053371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.872 ms 00:25:01.611 [2024-07-20 16:00:36.053382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:01.611 [2024-07-20 16:00:36.053422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:01.611 [2024-07-20 16:00:36.053438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:25:01.611 [2024-07-20 16:00:36.053448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:01.611 [2024-07-20 16:00:36.053459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:01.611 [2024-07-20 16:00:36.053582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:01.611 [2024-07-20 16:00:36.053594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:01.611 [2024-07-20 16:00:36.053604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:25:01.611 [2024-07-20 16:00:36.053614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:01.611 [2024-07-20 16:00:36.053636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:01.611 [2024-07-20 16:00:36.053646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:01.611 [2024-07-20 16:00:36.053659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:01.611 [2024-07-20 16:00:36.053668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:01.611 [2024-07-20 16:00:36.053708] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:25:01.611 [2024-07-20 16:00:36.053720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:01.611 [2024-07-20 16:00:36.053730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:25:01.611 [2024-07-20 16:00:36.053740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:25:01.611 [2024-07-20 16:00:36.053750] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:01.611 [2024-07-20 16:00:36.053795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:01.611 [2024-07-20 16:00:36.053806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:01.611 [2024-07-20 16:00:36.053816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:25:01.611 [2024-07-20 16:00:36.053829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:01.611 [2024-07-20 16:00:36.054850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1330.996 ms, result 0 00:25:01.611 [2024-07-20 16:00:36.067164] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.611 [2024-07-20 16:00:36.083155] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:01.611 [2024-07-20 16:00:36.091251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:01.870 16:00:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:01.870 16:00:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:25:01.870 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:01.870 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:25:01.870 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:25:01.870 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:25:01.870 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:25:01.870 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:01.870 Validate MD5 checksum, iteration 1 00:25:01.870 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:25:01.870 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:01.871 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:01.871 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:01.871 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:01.871 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:01.871 16:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:01.871 [2024-07-20 16:00:36.662760] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
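The startup trace above is the dirty-recovery path: finding no clean-shutdown marker, FTL preprocessed the four P2L checkpoints, replayed the two open NV cache chunks (offsets 262144 and 524288, seq ids 14 and 15), and rebuilt band state, for a total startup time of 1330.996 ms. The band validity figures are internally consistent:

    validity = valid_blocks / band_size
    fully written bands:  261120 / 261120 = 1.0
    partial band:         2048 / 261120 ≈ 0.0078431373  (printed above as 0.007843137254901933)

The same numbers reappear in the shutdown dump near the end of the run ("Band 3: 2048 / 261120 wr_cnt: 1 state: closed"), where band numbering is apparently 1-based rather than the 0-based ids of the property dump.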
00:25:01.871 [2024-07-20 16:00:36.662916] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94227 ] 00:25:02.128 [2024-07-20 16:00:36.814813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.128 [2024-07-20 16:00:36.856997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.001  Copying: 730/1024 [MB] (730 MBps) Copying: 1024/1024 [MB] (average 723 MBps) 00:25:05.001 00:25:05.001 16:00:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:05.001 16:00:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:06.901 16:00:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:06.901 Validate MD5 checksum, iteration 2 00:25:06.901 16:00:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cab7ee4c8b411329e3ccfea62b7e5f5b 00:25:06.901 16:00:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cab7ee4c8b411329e3ccfea62b7e5f5b != \c\a\b\7\e\e\4\c\8\b\4\1\1\3\2\9\e\3\c\c\f\e\a\6\2\b\7\e\5\f\5\b ]] 00:25:06.901 16:00:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:06.901 16:00:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:06.901 16:00:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:06.901 16:00:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:06.901 16:00:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:06.901 16:00:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:06.901 16:00:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:06.901 16:00:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:06.901 16:00:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:06.901 [2024-07-20 16:00:41.322277] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
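The digest comparison above prints oddly because every character of the expected sum is backslash-escaped. Inside bash's [[ ]], the right-hand side of != is a glob pattern, so the harness escapes it to force a byte-for-byte match; quoting a variable achieves the same effect, as this sketch with the iteration-1 digest shows:

    sum=cab7ee4c8b411329e3ccfea62b7e5f5b
    expected=cab7ee4c8b411329e3ccfea62b7e5f5b
    # an unquoted RHS would be treated as a pattern; quoting makes it literal,
    # exactly like the \c\a\b\7... escaping seen in the trace
    [[ $sum != "$expected" ]] && exit 1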
00:25:06.901 [2024-07-20 16:00:41.322405] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94278 ] 00:25:06.901 [2024-07-20 16:00:41.473625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.901 [2024-07-20 16:00:41.518349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.823  Copying: 729/1024 [MB] (729 MBps) Copying: 1024/1024 [MB] (average 726 MBps) 00:25:11.823 00:25:11.823 16:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:11.823 16:00:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=437d6c33f93793a16c57d5dec096f39e 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 437d6c33f93793a16c57d5dec096f39e != \4\3\7\d\6\c\3\3\f\9\3\7\9\3\a\1\6\c\5\7\d\5\d\e\c\0\9\6\f\3\9\e ]] 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94189 ]] 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94189 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 94189 ']' 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 94189 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:13.198 16:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94189 00:25:13.459 killing process with pid 94189 00:25:13.459 16:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:13.459 16:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:13.459 16:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94189' 00:25:13.459 16:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 94189 00:25:13.459 16:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 94189 
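Cleanup ends by shutting the restarted target down through the harness's killprocess helper. A rough reconstruction from the autotest_common.sh calls visible in the trace (not verbatim; the real helper also special-cases sudo-wrapped and non-Linux processes):

    killprocess() {
        local pid=$1
        # identify what is about to be killed; in this run it resolves to reactor_0
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1  # never signal a sudo wrapper blindly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"  # reap the child and collect its exit status
    }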
00:25:13.459 [2024-07-20 16:00:48.154487] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:13.459 [2024-07-20 16:00:48.157877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 [2024-07-20 16:00:48.157922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:13.459 [2024-07-20 16:00:48.157937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:13.459 [2024-07-20 16:00:48.157947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.157972] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:13.459 [2024-07-20 16:00:48.158666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 [2024-07-20 16:00:48.158688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:13.459 [2024-07-20 16:00:48.158699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.680 ms 00:25:13.459 [2024-07-20 16:00:48.158709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.158935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 [2024-07-20 16:00:48.158948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:13.459 [2024-07-20 16:00:48.158959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:25:13.459 [2024-07-20 16:00:48.158972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.160105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 [2024-07-20 16:00:48.160143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:13.459 [2024-07-20 16:00:48.160155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.115 ms 00:25:13.459 [2024-07-20 16:00:48.160165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.161109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 [2024-07-20 16:00:48.161138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:25:13.459 [2024-07-20 16:00:48.161150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.912 ms 00:25:13.459 [2024-07-20 16:00:48.161166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.162668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 [2024-07-20 16:00:48.162705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:13.459 [2024-07-20 16:00:48.162718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.463 ms 00:25:13.459 [2024-07-20 16:00:48.162728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.164018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 [2024-07-20 16:00:48.164054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:13.459 [2024-07-20 16:00:48.164067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.258 ms 00:25:13.459 [2024-07-20 16:00:48.164096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.164168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 
[2024-07-20 16:00:48.164181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:13.459 [2024-07-20 16:00:48.164192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:25:13.459 [2024-07-20 16:00:48.164202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.165586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 [2024-07-20 16:00:48.165620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:25:13.459 [2024-07-20 16:00:48.165632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.367 ms 00:25:13.459 [2024-07-20 16:00:48.165641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.166919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 [2024-07-20 16:00:48.166953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:25:13.459 [2024-07-20 16:00:48.166965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.247 ms 00:25:13.459 [2024-07-20 16:00:48.166975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.168161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 [2024-07-20 16:00:48.168196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:13.459 [2024-07-20 16:00:48.168208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.159 ms 00:25:13.459 [2024-07-20 16:00:48.168218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.169372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 [2024-07-20 16:00:48.169403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:13.459 [2024-07-20 16:00:48.169414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.099 ms 00:25:13.459 [2024-07-20 16:00:48.169424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.169453] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:13.459 [2024-07-20 16:00:48.169469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:13.459 [2024-07-20 16:00:48.169481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:13.459 [2024-07-20 16:00:48.169492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:13.459 [2024-07-20 16:00:48.169503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 16:00:48.169514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 16:00:48.169525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 16:00:48.169535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 16:00:48.169545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 16:00:48.169556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 
16:00:48.169567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 16:00:48.169577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 16:00:48.169588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 16:00:48.169598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 16:00:48.169609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 16:00:48.169619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 16:00:48.169630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 16:00:48.169640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 16:00:48.169651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:13.459 [2024-07-20 16:00:48.169663] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:13.459 [2024-07-20 16:00:48.169686] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5b27914f-95a9-438a-8e33-df91801c691f 00:25:13.459 [2024-07-20 16:00:48.169697] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:13.459 [2024-07-20 16:00:48.169710] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:25:13.459 [2024-07-20 16:00:48.169725] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:25:13.459 [2024-07-20 16:00:48.169735] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:25:13.459 [2024-07-20 16:00:48.169744] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:13.459 [2024-07-20 16:00:48.169754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:13.459 [2024-07-20 16:00:48.169764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:13.459 [2024-07-20 16:00:48.169773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:13.459 [2024-07-20 16:00:48.169782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:13.459 [2024-07-20 16:00:48.169792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 [2024-07-20 16:00:48.169802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:13.459 [2024-07-20 16:00:48.169812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.340 ms 00:25:13.459 [2024-07-20 16:00:48.169823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.171529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 [2024-07-20 16:00:48.171564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:13.459 [2024-07-20 16:00:48.171576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.690 ms 00:25:13.459 [2024-07-20 16:00:48.171602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.171715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:13.459 [2024-07-20 16:00:48.171727] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:13.459 [2024-07-20 16:00:48.171745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.091 ms 00:25:13.459 [2024-07-20 16:00:48.171754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.178776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.459 [2024-07-20 16:00:48.178806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:13.459 [2024-07-20 16:00:48.178842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.459 [2024-07-20 16:00:48.178853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.178884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.459 [2024-07-20 16:00:48.178895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:13.459 [2024-07-20 16:00:48.178905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.459 [2024-07-20 16:00:48.178915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.178997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.459 [2024-07-20 16:00:48.179010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:13.459 [2024-07-20 16:00:48.179021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.459 [2024-07-20 16:00:48.179031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.459 [2024-07-20 16:00:48.179049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.460 [2024-07-20 16:00:48.179060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:13.460 [2024-07-20 16:00:48.179069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.460 [2024-07-20 16:00:48.179079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.460 [2024-07-20 16:00:48.190652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.460 [2024-07-20 16:00:48.190693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:13.460 [2024-07-20 16:00:48.190722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.460 [2024-07-20 16:00:48.190732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.460 [2024-07-20 16:00:48.198923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.460 [2024-07-20 16:00:48.198958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:13.460 [2024-07-20 16:00:48.198971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.460 [2024-07-20 16:00:48.198981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.460 [2024-07-20 16:00:48.199043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.460 [2024-07-20 16:00:48.199060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:13.460 [2024-07-20 16:00:48.199071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.460 [2024-07-20 16:00:48.199080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.460 [2024-07-20 16:00:48.199116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Rollback 00:25:13.460 [2024-07-20 16:00:48.199127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:13.460 [2024-07-20 16:00:48.199137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.460 [2024-07-20 16:00:48.199147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.460 [2024-07-20 16:00:48.199218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.460 [2024-07-20 16:00:48.199234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:13.460 [2024-07-20 16:00:48.199244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.460 [2024-07-20 16:00:48.199253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.460 [2024-07-20 16:00:48.199287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.460 [2024-07-20 16:00:48.199306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:13.460 [2024-07-20 16:00:48.199331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.460 [2024-07-20 16:00:48.199348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.460 [2024-07-20 16:00:48.199449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.460 [2024-07-20 16:00:48.199462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:13.460 [2024-07-20 16:00:48.199476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.460 [2024-07-20 16:00:48.199486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.460 [2024-07-20 16:00:48.199534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:13.460 [2024-07-20 16:00:48.199546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:13.460 [2024-07-20 16:00:48.199556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:13.460 [2024-07-20 16:00:48.199566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:13.460 [2024-07-20 16:00:48.199700] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 41.857 ms, result 0 00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:13.719 Remove shared memory files 00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid93990 00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f 
rm -f /dev/shm/iscsi
00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:25:13.719 ************************************
00:25:13.719 END TEST ftl_upgrade_shutdown
00:25:13.719 ************************************
00:25:13.719
00:25:13.719 real 1m9.202s
00:25:13.719 user 1m30.341s
00:25:13.719 sys 0m19.889s
00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable
00:25:13.719 16:00:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:25:13.977 16:00:48 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]]
00:25:13.977 16:00:48 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0
00:25:13.977 16:00:48 ftl -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:25:13.977 16:00:48 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:25:13.977 16:00:48 ftl -- common/autotest_common.sh@10 -- # set +x
00:25:13.977 ************************************
00:25:13.977 START TEST ftl_restore_fast
00:25:13.977 ************************************
00:25:13.977 16:00:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0
00:25:13.977 * Looking for test storage...
00:25:13.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
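[Note: restore.sh locates the repository relative to itself before doing anything else; the dirname/readlink records above and the rootdir/rpc_py assignments that follow are that derivation. A minimal sketch of the equivalent shell, assuming the workspace layout seen in this log:
    testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py
The -f and -c 0000:00:10.0 arguments passed to restore.sh select the fast-shutdown variant and the NV cache device; the getopts trace below shows them being consumed.]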
00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:13.977 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.zIKA2m27Ah 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:25:13.978 16:00:48 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:13.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94426 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94426 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- common/autotest_common.sh@827 -- # '[' -z 94426 ']' 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:13.978 16:00:48 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:25:14.236 [2024-07-20 16:00:48.809515] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:14.236 [2024-07-20 16:00:48.809661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94426 ] 00:25:14.236 [2024-07-20 16:00:48.946396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.236 [2024-07-20 16:00:48.990723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.803 16:00:49 ftl.ftl_restore_fast -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:14.803 16:00:49 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # return 0 00:25:14.803 16:00:49 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:14.803 16:00:49 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:25:14.803 16:00:49 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:14.803 16:00:49 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:25:14.803 16:00:49 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:25:14.803 16:00:49 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:15.062 16:00:49 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:15.062 16:00:49 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:25:15.062 16:00:49 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:15.062 16:00:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:25:15.062 16:00:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:15.062 16:00:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:15.062 16:00:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local 
nb 00:25:15.062 16:00:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:15.321 16:00:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:15.321 { 00:25:15.321 "name": "nvme0n1", 00:25:15.321 "aliases": [ 00:25:15.321 "7f4c65ae-2452-4e97-939b-cb16383c179b" 00:25:15.321 ], 00:25:15.321 "product_name": "NVMe disk", 00:25:15.321 "block_size": 4096, 00:25:15.321 "num_blocks": 1310720, 00:25:15.321 "uuid": "7f4c65ae-2452-4e97-939b-cb16383c179b", 00:25:15.321 "assigned_rate_limits": { 00:25:15.321 "rw_ios_per_sec": 0, 00:25:15.321 "rw_mbytes_per_sec": 0, 00:25:15.321 "r_mbytes_per_sec": 0, 00:25:15.321 "w_mbytes_per_sec": 0 00:25:15.321 }, 00:25:15.321 "claimed": true, 00:25:15.321 "claim_type": "read_many_write_one", 00:25:15.321 "zoned": false, 00:25:15.321 "supported_io_types": { 00:25:15.321 "read": true, 00:25:15.321 "write": true, 00:25:15.321 "unmap": true, 00:25:15.321 "write_zeroes": true, 00:25:15.321 "flush": true, 00:25:15.321 "reset": true, 00:25:15.321 "compare": true, 00:25:15.321 "compare_and_write": false, 00:25:15.321 "abort": true, 00:25:15.321 "nvme_admin": true, 00:25:15.321 "nvme_io": true 00:25:15.321 }, 00:25:15.321 "driver_specific": { 00:25:15.321 "nvme": [ 00:25:15.321 { 00:25:15.321 "pci_address": "0000:00:11.0", 00:25:15.321 "trid": { 00:25:15.321 "trtype": "PCIe", 00:25:15.321 "traddr": "0000:00:11.0" 00:25:15.321 }, 00:25:15.321 "ctrlr_data": { 00:25:15.321 "cntlid": 0, 00:25:15.321 "vendor_id": "0x1b36", 00:25:15.321 "model_number": "QEMU NVMe Ctrl", 00:25:15.321 "serial_number": "12341", 00:25:15.321 "firmware_revision": "8.0.0", 00:25:15.321 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:15.321 "oacs": { 00:25:15.321 "security": 0, 00:25:15.321 "format": 1, 00:25:15.321 "firmware": 0, 00:25:15.321 "ns_manage": 1 00:25:15.321 }, 00:25:15.321 "multi_ctrlr": false, 00:25:15.321 "ana_reporting": false 00:25:15.321 }, 00:25:15.321 "vs": { 00:25:15.321 "nvme_version": "1.4" 00:25:15.321 }, 00:25:15.321 "ns_data": { 00:25:15.322 "id": 1, 00:25:15.322 "can_share": false 00:25:15.322 } 00:25:15.322 } 00:25:15.322 ], 00:25:15.322 "mp_policy": "active_passive" 00:25:15.322 } 00:25:15.322 } 00:25:15.322 ]' 00:25:15.322 16:00:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:15.322 16:00:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:15.322 16:00:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:15.322 16:00:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=1310720 00:25:15.322 16:00:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:25:15.322 16:00:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 5120 00:25:15.322 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:25:15.322 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:15.322 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:25:15.322 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:15.322 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:15.659 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=42a28e76-fa6e-4471-9742-9ca3f8351f3e 00:25:15.659 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:25:15.659 16:00:50 
ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42a28e76-fa6e-4471-9742-9ca3f8351f3e 00:25:15.659 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:15.930 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=e358f2f2-6fdf-4a86-9e1d-5485e896949b 00:25:15.930 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e358f2f2-6fdf-4a86-9e1d-5485e896949b 00:25:16.188 16:00:50 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=c453e7b5-5edd-4899-9b66-a2b1d956cfc0 00:25:16.188 16:00:50 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:25:16.188 16:00:50 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c453e7b5-5edd-4899-9b66-a2b1d956cfc0 00:25:16.188 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:25:16.188 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:16.188 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=c453e7b5-5edd-4899-9b66-a2b1d956cfc0 00:25:16.188 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:25:16.188 16:00:50 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size c453e7b5-5edd-4899-9b66-a2b1d956cfc0 00:25:16.188 16:00:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=c453e7b5-5edd-4899-9b66-a2b1d956cfc0 00:25:16.188 16:00:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:16.188 16:00:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:16.188 16:00:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:25:16.188 16:00:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c453e7b5-5edd-4899-9b66-a2b1d956cfc0 00:25:16.188 16:00:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:16.188 { 00:25:16.188 "name": "c453e7b5-5edd-4899-9b66-a2b1d956cfc0", 00:25:16.188 "aliases": [ 00:25:16.188 "lvs/nvme0n1p0" 00:25:16.188 ], 00:25:16.188 "product_name": "Logical Volume", 00:25:16.188 "block_size": 4096, 00:25:16.188 "num_blocks": 26476544, 00:25:16.188 "uuid": "c453e7b5-5edd-4899-9b66-a2b1d956cfc0", 00:25:16.188 "assigned_rate_limits": { 00:25:16.188 "rw_ios_per_sec": 0, 00:25:16.188 "rw_mbytes_per_sec": 0, 00:25:16.188 "r_mbytes_per_sec": 0, 00:25:16.188 "w_mbytes_per_sec": 0 00:25:16.188 }, 00:25:16.188 "claimed": false, 00:25:16.188 "zoned": false, 00:25:16.188 "supported_io_types": { 00:25:16.188 "read": true, 00:25:16.188 "write": true, 00:25:16.188 "unmap": true, 00:25:16.188 "write_zeroes": true, 00:25:16.188 "flush": false, 00:25:16.188 "reset": true, 00:25:16.188 "compare": false, 00:25:16.188 "compare_and_write": false, 00:25:16.188 "abort": false, 00:25:16.188 "nvme_admin": false, 00:25:16.188 "nvme_io": false 00:25:16.188 }, 00:25:16.188 "driver_specific": { 00:25:16.188 "lvol": { 00:25:16.188 "lvol_store_uuid": "e358f2f2-6fdf-4a86-9e1d-5485e896949b", 00:25:16.188 "base_bdev": "nvme0n1", 00:25:16.188 "thin_provision": true, 00:25:16.188 "num_allocated_clusters": 0, 00:25:16.188 "snapshot": false, 00:25:16.188 "clone": false, 00:25:16.188 "esnap_clone": false 00:25:16.188 } 00:25:16.188 } 00:25:16.188 } 00:25:16.188 ]' 00:25:16.188 16:00:50 
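[Note: this is where the stack under FTL gets built: the leftover lvstore from the previous test is deleted (clear_lvols), a fresh lvstore named lvs is created on nvme0n1, and a 103424 MiB thin-provisioned lvol is carved from it. The same sequence as traced above:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42a28e76-fa6e-4471-9742-9ca3f8351f3e
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e358f2f2-6fdf-4a86-9e1d-5485e896949b
The -t flag makes the lvol thin-provisioned, which is how a 103424 MiB volume sits on a 5120 MiB namespace; blocks are only allocated as FTL writes land.]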
ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:16.445 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:16.445 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:16.445 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:25:16.445 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:25:16.445 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:25:16.445 16:00:51 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:25:16.445 16:00:51 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:25:16.445 16:00:51 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:16.703 16:00:51 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:16.703 16:00:51 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:16.703 16:00:51 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size c453e7b5-5edd-4899-9b66-a2b1d956cfc0 00:25:16.703 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=c453e7b5-5edd-4899-9b66-a2b1d956cfc0 00:25:16.703 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:16.703 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:16.703 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:25:16.703 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c453e7b5-5edd-4899-9b66-a2b1d956cfc0 00:25:16.703 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:16.703 { 00:25:16.703 "name": "c453e7b5-5edd-4899-9b66-a2b1d956cfc0", 00:25:16.703 "aliases": [ 00:25:16.703 "lvs/nvme0n1p0" 00:25:16.703 ], 00:25:16.703 "product_name": "Logical Volume", 00:25:16.703 "block_size": 4096, 00:25:16.703 "num_blocks": 26476544, 00:25:16.703 "uuid": "c453e7b5-5edd-4899-9b66-a2b1d956cfc0", 00:25:16.703 "assigned_rate_limits": { 00:25:16.703 "rw_ios_per_sec": 0, 00:25:16.703 "rw_mbytes_per_sec": 0, 00:25:16.703 "r_mbytes_per_sec": 0, 00:25:16.703 "w_mbytes_per_sec": 0 00:25:16.703 }, 00:25:16.703 "claimed": false, 00:25:16.703 "zoned": false, 00:25:16.703 "supported_io_types": { 00:25:16.703 "read": true, 00:25:16.703 "write": true, 00:25:16.703 "unmap": true, 00:25:16.703 "write_zeroes": true, 00:25:16.703 "flush": false, 00:25:16.703 "reset": true, 00:25:16.703 "compare": false, 00:25:16.703 "compare_and_write": false, 00:25:16.703 "abort": false, 00:25:16.703 "nvme_admin": false, 00:25:16.703 "nvme_io": false 00:25:16.703 }, 00:25:16.703 "driver_specific": { 00:25:16.703 "lvol": { 00:25:16.703 "lvol_store_uuid": "e358f2f2-6fdf-4a86-9e1d-5485e896949b", 00:25:16.703 "base_bdev": "nvme0n1", 00:25:16.703 "thin_provision": true, 00:25:16.703 "num_allocated_clusters": 0, 00:25:16.703 "snapshot": false, 00:25:16.703 "clone": false, 00:25:16.703 "esnap_clone": false 00:25:16.703 } 00:25:16.703 } 00:25:16.703 } 00:25:16.703 ]' 00:25:16.703 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:16.704 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:16.704 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 
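[Note: get_bdev_size, traced here and at each earlier call, computes a bdev's size in MiB from the bdev_get_bdevs JSON as block_size * num_blocks / 1 MiB. A sketch of that arithmetic, assuming the rpc.py path used throughout this log:
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c453e7b5-5edd-4899-9b66-a2b1d956cfc0)
    bs=$(jq '.[] .block_size' <<<"$info")    # 4096
    nb=$(jq '.[] .num_blocks' <<<"$info")    # 26476544
    echo $(( bs * nb / 1024 / 1024 ))        # 103424 MiB
which matches the nb=26476544 and bdev_size=103424 results recorded on the next lines.]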
00:25:16.961 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:25:16.961 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:25:16.961 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:25:16.961 16:00:51 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:25:16.961 16:00:51 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:16.961 16:00:51 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:25:16.961 16:00:51 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size c453e7b5-5edd-4899-9b66-a2b1d956cfc0 00:25:16.961 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=c453e7b5-5edd-4899-9b66-a2b1d956cfc0 00:25:16.961 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:16.961 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:16.961 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:25:16.961 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c453e7b5-5edd-4899-9b66-a2b1d956cfc0 00:25:17.218 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:17.218 { 00:25:17.218 "name": "c453e7b5-5edd-4899-9b66-a2b1d956cfc0", 00:25:17.218 "aliases": [ 00:25:17.218 "lvs/nvme0n1p0" 00:25:17.218 ], 00:25:17.218 "product_name": "Logical Volume", 00:25:17.218 "block_size": 4096, 00:25:17.218 "num_blocks": 26476544, 00:25:17.218 "uuid": "c453e7b5-5edd-4899-9b66-a2b1d956cfc0", 00:25:17.218 "assigned_rate_limits": { 00:25:17.218 "rw_ios_per_sec": 0, 00:25:17.218 "rw_mbytes_per_sec": 0, 00:25:17.218 "r_mbytes_per_sec": 0, 00:25:17.218 "w_mbytes_per_sec": 0 00:25:17.218 }, 00:25:17.218 "claimed": false, 00:25:17.218 "zoned": false, 00:25:17.218 "supported_io_types": { 00:25:17.218 "read": true, 00:25:17.218 "write": true, 00:25:17.218 "unmap": true, 00:25:17.218 "write_zeroes": true, 00:25:17.218 "flush": false, 00:25:17.218 "reset": true, 00:25:17.218 "compare": false, 00:25:17.218 "compare_and_write": false, 00:25:17.218 "abort": false, 00:25:17.218 "nvme_admin": false, 00:25:17.218 "nvme_io": false 00:25:17.218 }, 00:25:17.218 "driver_specific": { 00:25:17.218 "lvol": { 00:25:17.218 "lvol_store_uuid": "e358f2f2-6fdf-4a86-9e1d-5485e896949b", 00:25:17.218 "base_bdev": "nvme0n1", 00:25:17.218 "thin_provision": true, 00:25:17.218 "num_allocated_clusters": 0, 00:25:17.218 "snapshot": false, 00:25:17.218 "clone": false, 00:25:17.218 "esnap_clone": false 00:25:17.218 } 00:25:17.218 } 00:25:17.218 } 00:25:17.218 ]' 00:25:17.218 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:17.218 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:17.218 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:17.218 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:25:17.218 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:25:17.218 16:00:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:25:17.218 16:00:51 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:25:17.218 16:00:51 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # 
ftl_construct_args='bdev_ftl_create -b ftl0 -d c453e7b5-5edd-4899-9b66-a2b1d956cfc0 --l2p_dram_limit 10'
00:25:17.218 16:00:51 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']'
00:25:17.218 16:00:51 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:25:17.218 16:00:51 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0'
00:25:17.218 16:00:51 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']'
00:25:17.218 16:00:51 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown'
00:25:17.218 16:00:51 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c453e7b5-5edd-4899-9b66-a2b1d956cfc0 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown
00:25:17.476 [2024-07-20 16:00:52.130455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:17.476 [2024-07-20 16:00:52.130503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:25:17.476 [2024-07-20 16:00:52.130522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:25:17.476 [2024-07-20 16:00:52.130534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:17.476 [2024-07-20 16:00:52.130604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:17.476 [2024-07-20 16:00:52.130616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:17.476 [2024-07-20 16:00:52.130629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:25:17.476 [2024-07-20 16:00:52.130649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:17.476 [2024-07-20 16:00:52.130684] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:17.476 [2024-07-20 16:00:52.130959] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:17.476 [2024-07-20 16:00:52.130989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:17.476 [2024-07-20 16:00:52.131002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:17.476 [2024-07-20 16:00:52.131016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms
00:25:17.476 [2024-07-20 16:00:52.131026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:17.476 [2024-07-20 16:00:52.131197] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 68669144-5c5a-4b7c-bb83-844974b2e831
00:25:17.476 [2024-07-20 16:00:52.132610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:17.476 [2024-07-20 16:00:52.132640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:25:17.476 [2024-07-20 16:00:52.132659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms
00:25:17.476 [2024-07-20 16:00:52.132676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:17.476 [2024-07-20 16:00:52.140127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:17.476 [2024-07-20 16:00:52.140161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:17.476 [2024-07-20 16:00:52.140174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.423 ms
00:25:17.476 [2024-07-20 16:00:52.140186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.476
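[Note: the bdev_ftl_create call assembled above is the core of the setup: -d names the base bdev (the thin-provisioned lvol), -c the NV cache bdev split from the second controller, --l2p_dram_limit caps the DRAM-resident L2P table at 10 MiB, and --fast-shutdown arms the quick-restore path this test exercises. As a standalone invocation, assuming the same bdev names:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d c453e7b5-5edd-4899-9b66-a2b1d956cfc0 \
        -c nvc0n1p0 --l2p_dram_limit 10 --fast-shutdown
The -t 240 raises the RPC client timeout, since first-time creation scrubs the NV cache data region, the multi-second step visible further down.]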
[2024-07-20 16:00:52.140270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.476 [2024-07-20 16:00:52.140291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:17.476 [2024-07-20 16:00:52.140302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:17.476 [2024-07-20 16:00:52.140323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.476 [2024-07-20 16:00:52.140393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.476 [2024-07-20 16:00:52.140408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:17.476 [2024-07-20 16:00:52.140419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:17.476 [2024-07-20 16:00:52.140438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.476 [2024-07-20 16:00:52.140464] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:17.476 [2024-07-20 16:00:52.142279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.476 [2024-07-20 16:00:52.142313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:17.476 [2024-07-20 16:00:52.142327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.823 ms 00:25:17.476 [2024-07-20 16:00:52.142338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.476 [2024-07-20 16:00:52.142410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.476 [2024-07-20 16:00:52.142424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:17.476 [2024-07-20 16:00:52.142444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:17.476 [2024-07-20 16:00:52.142454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.476 [2024-07-20 16:00:52.142478] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:17.476 [2024-07-20 16:00:52.142619] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:17.476 [2024-07-20 16:00:52.142637] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:17.476 [2024-07-20 16:00:52.142650] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:17.476 [2024-07-20 16:00:52.142666] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:17.476 [2024-07-20 16:00:52.142677] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:17.476 [2024-07-20 16:00:52.142691] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:17.476 [2024-07-20 16:00:52.142701] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:17.476 [2024-07-20 16:00:52.142715] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:17.476 [2024-07-20 16:00:52.142725] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:17.476 [2024-07-20 16:00:52.142737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.476 [2024-07-20 16:00:52.142747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:17.477 [2024-07-20 
16:00:52.142767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:25:17.477 [2024-07-20 16:00:52.142777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.477 [2024-07-20 16:00:52.142850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.477 [2024-07-20 16:00:52.142860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:17.477 [2024-07-20 16:00:52.142876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:17.477 [2024-07-20 16:00:52.142886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.477 [2024-07-20 16:00:52.142973] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:17.477 [2024-07-20 16:00:52.142984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:17.477 [2024-07-20 16:00:52.142997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:17.477 [2024-07-20 16:00:52.143007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.477 [2024-07-20 16:00:52.143021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:17.477 [2024-07-20 16:00:52.143030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:17.477 [2024-07-20 16:00:52.143042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:17.477 [2024-07-20 16:00:52.143051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:17.477 [2024-07-20 16:00:52.143063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:17.477 [2024-07-20 16:00:52.143072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:17.477 [2024-07-20 16:00:52.143083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:17.477 [2024-07-20 16:00:52.143093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:17.477 [2024-07-20 16:00:52.143104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:17.477 [2024-07-20 16:00:52.143113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:17.477 [2024-07-20 16:00:52.143127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:17.477 [2024-07-20 16:00:52.143136] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.477 [2024-07-20 16:00:52.143147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:17.477 [2024-07-20 16:00:52.143157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:17.477 [2024-07-20 16:00:52.143168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.477 [2024-07-20 16:00:52.143177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:17.477 [2024-07-20 16:00:52.143188] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:17.477 [2024-07-20 16:00:52.143197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.477 [2024-07-20 16:00:52.143208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:17.477 [2024-07-20 16:00:52.143217] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:17.477 [2024-07-20 16:00:52.143229] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.477 [2024-07-20 16:00:52.143239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 
00:25:17.477 [2024-07-20 16:00:52.143250] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:17.477 [2024-07-20 16:00:52.143259] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.477 [2024-07-20 16:00:52.143270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:17.477 [2024-07-20 16:00:52.143278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:17.477 [2024-07-20 16:00:52.143293] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.477 [2024-07-20 16:00:52.143302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:17.477 [2024-07-20 16:00:52.143313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:17.477 [2024-07-20 16:00:52.143322] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:17.477 [2024-07-20 16:00:52.143333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:17.477 [2024-07-20 16:00:52.143342] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:17.477 [2024-07-20 16:00:52.143364] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:17.477 [2024-07-20 16:00:52.143374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:17.477 [2024-07-20 16:00:52.143386] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:17.477 [2024-07-20 16:00:52.143394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.477 [2024-07-20 16:00:52.143406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:17.477 [2024-07-20 16:00:52.143416] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:17.477 [2024-07-20 16:00:52.143427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.477 [2024-07-20 16:00:52.143435] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:17.477 [2024-07-20 16:00:52.143448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:17.477 [2024-07-20 16:00:52.143458] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:17.477 [2024-07-20 16:00:52.143472] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.477 [2024-07-20 16:00:52.143485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:17.477 [2024-07-20 16:00:52.143497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:17.477 [2024-07-20 16:00:52.143506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:17.477 [2024-07-20 16:00:52.143518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:17.477 [2024-07-20 16:00:52.143526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:17.477 [2024-07-20 16:00:52.143538] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:17.477 [2024-07-20 16:00:52.143551] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:17.477 [2024-07-20 16:00:52.143567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:17.477 [2024-07-20 16:00:52.143589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:17.477 
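[Note: in the superblock metadata dump above, blk_offs and blk_sz are counted in 4 KiB FTL blocks, which is how these descriptors line up with the MiB figures in the layout dump: type:0x2 with blk_sz:0x5000 appears to be the 80 MiB l2p region ("Region l2p ... blocks: 80.00 MiB"), and the type:0xa-0xd entries below with blk_sz:0x800 each decode to the four 8 MiB P2L checkpoint regions:
    echo $(( 0x800 * 4096 / 1024 / 1024 ))   # 8 MiB, one per p2l0..p2l3
The remaining region descriptors continue below.]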
[2024-07-20 16:00:52.143603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:17.477 [2024-07-20 16:00:52.143613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:17.477 [2024-07-20 16:00:52.143626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:17.477 [2024-07-20 16:00:52.143636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:17.477 [2024-07-20 16:00:52.143649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:17.477 [2024-07-20 16:00:52.143659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:17.477 [2024-07-20 16:00:52.143674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:17.477 [2024-07-20 16:00:52.143684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:17.477 [2024-07-20 16:00:52.143696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:17.477 [2024-07-20 16:00:52.143706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:17.477 [2024-07-20 16:00:52.143718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:17.477 [2024-07-20 16:00:52.143728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:17.477 [2024-07-20 16:00:52.143740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:17.477 [2024-07-20 16:00:52.143750] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:17.477 [2024-07-20 16:00:52.143763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:17.477 [2024-07-20 16:00:52.143774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:17.477 [2024-07-20 16:00:52.143787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:17.477 [2024-07-20 16:00:52.143797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:17.477 [2024-07-20 16:00:52.143810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:17.477 [2024-07-20 16:00:52.143820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.477 [2024-07-20 16:00:52.143832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:17.477 [2024-07-20 
16:00:52.143842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:25:17.477 [2024-07-20 16:00:52.143857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.477 [2024-07-20 16:00:52.143899] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:17.477 [2024-07-20 16:00:52.143914] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:20.759 [2024-07-20 16:00:55.416833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.759 [2024-07-20 16:00:55.416920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:20.759 [2024-07-20 16:00:55.416952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3278.246 ms 00:25:20.759 [2024-07-20 16:00:55.416966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.759 [2024-07-20 16:00:55.428190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.759 [2024-07-20 16:00:55.428242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:20.759 [2024-07-20 16:00:55.428257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.124 ms 00:25:20.759 [2024-07-20 16:00:55.428285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.759 [2024-07-20 16:00:55.428393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.759 [2024-07-20 16:00:55.428426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:20.759 [2024-07-20 16:00:55.428436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:20.759 [2024-07-20 16:00:55.428456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.759 [2024-07-20 16:00:55.438909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.759 [2024-07-20 16:00:55.438952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:20.759 [2024-07-20 16:00:55.438983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.417 ms 00:25:20.759 [2024-07-20 16:00:55.438995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.759 [2024-07-20 16:00:55.439033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.759 [2024-07-20 16:00:55.439046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:20.759 [2024-07-20 16:00:55.439057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:20.759 [2024-07-20 16:00:55.439069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.759 [2024-07-20 16:00:55.439544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.759 [2024-07-20 16:00:55.439569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:20.759 [2024-07-20 16:00:55.439580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:25:20.759 [2024-07-20 16:00:55.439592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.759 [2024-07-20 16:00:55.439699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.759 [2024-07-20 16:00:55.439720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:20.759 [2024-07-20 16:00:55.439731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.085 ms 00:25:20.759 [2024-07-20 16:00:55.439743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.759 [2024-07-20 16:00:55.447010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.759 [2024-07-20 16:00:55.447048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:20.759 [2024-07-20 16:00:55.447061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.258 ms 00:25:20.759 [2024-07-20 16:00:55.447081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.759 [2024-07-20 16:00:55.454470] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:20.759 [2024-07-20 16:00:55.457692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.759 [2024-07-20 16:00:55.457721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:20.759 [2024-07-20 16:00:55.457735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.542 ms 00:25:20.759 [2024-07-20 16:00:55.457752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.017 [2024-07-20 16:00:55.594214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.017 [2024-07-20 16:00:55.594300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:21.017 [2024-07-20 16:00:55.594320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 136.628 ms 00:25:21.017 [2024-07-20 16:00:55.594335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.017 [2024-07-20 16:00:55.594530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.017 [2024-07-20 16:00:55.594545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:21.017 [2024-07-20 16:00:55.594571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:25:21.017 [2024-07-20 16:00:55.594581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.017 [2024-07-20 16:00:55.599454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.017 [2024-07-20 16:00:55.599489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:21.017 [2024-07-20 16:00:55.599515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.826 ms 00:25:21.017 [2024-07-20 16:00:55.599550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.017 [2024-07-20 16:00:55.602870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.018 [2024-07-20 16:00:55.602904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:21.018 [2024-07-20 16:00:55.602919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.281 ms 00:25:21.018 [2024-07-20 16:00:55.602929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.018 [2024-07-20 16:00:55.603203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.018 [2024-07-20 16:00:55.603218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:21.018 [2024-07-20 16:00:55.603232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:25:21.018 [2024-07-20 16:00:55.603242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.018 [2024-07-20 16:00:55.675993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.018 
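[Note: the "l2p maximum resident size is: 9 (of 10) MiB" notice above is the --l2p_dram_limit 10 cap taking effect: a fully resident L2P for 20971520 entries at 4 bytes each would need 80 MiB, so only a ~9 MiB working set is kept in DRAM, with the backing copy in the NV cache's l2p region per the layout dump earlier. The Action record just above opens the Wipe P2L region step, whose name, duration, and status lines continue below.]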
[2024-07-20 16:00:55.676035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:25:21.018 [2024-07-20 16:00:55.676052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.823 ms
00:25:21.018 [2024-07-20 16:00:55.676066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:21.018 [2024-07-20 16:00:55.681291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:21.018 [2024-07-20 16:00:55.681327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:25:21.018 [2024-07-20 16:00:55.681343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.188 ms
00:25:21.018 [2024-07-20 16:00:55.681364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:21.018 [2024-07-20 16:00:55.684795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:21.018 [2024-07-20 16:00:55.684827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:25:21.018 [2024-07-20 16:00:55.684842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.391 ms
00:25:21.018 [2024-07-20 16:00:55.684852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:21.018 [2024-07-20 16:00:55.688431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:21.018 [2024-07-20 16:00:55.688464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:25:21.018 [2024-07-20 16:00:55.688480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.543 ms
00:25:21.018 [2024-07-20 16:00:55.688490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:21.018 [2024-07-20 16:00:55.688544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:21.018 [2024-07-20 16:00:55.688557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:25:21.018 [2024-07-20 16:00:55.688571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:25:21.018 [2024-07-20 16:00:55.688581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:21.018 [2024-07-20 16:00:55.688681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:21.018 [2024-07-20 16:00:55.688696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:21.018 [2024-07-20 16:00:55.688710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:25:21.018 [2024-07-20 16:00:55.688719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:21.018 [2024-07-20 16:00:55.689854] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3564.828 ms, result 0
00:25:21.018 {
00:25:21.018 "name": "ftl0",
00:25:21.018 "uuid": "68669144-5c5a-4b7c-bb83-844974b2e831"
00:25:21.018 }
00:25:21.018 16:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:25:21.018 16:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:25:21.275 16:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}'
00:25:21.275 16:00:55 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:25:21.275 [2024-07-20 16:00:56.052495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:21.275 [2024-07-20 16:00:56.052540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
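[Note: because ftl0 was created with --fast-shutdown, this bdev_ftl_unload is expected to persist the L2P and metadata and mark the device clean (the Persist L2P, Persist superblock, and Set FTL clean state steps in the shutdown trace below), so a later load can skip full recovery; timing that restore is the point of ftl_restore_fast. A sketch of the unload plus a reload of the same instance, assuming the saved UUID from the JSON above is reused:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d c453e7b5-5edd-4899-9b66-a2b1d956cfc0 -c nvc0n1p0 \
        -u 68669144-5c5a-4b7c-bb83-844974b2e831 --fast-shutdown
The duration and status records for the Deinit core IO channel step opened above continue below.]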
00:25:21.275 [2024-07-20 16:00:56.052560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:21.275 [2024-07-20 16:00:56.052591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.275 [2024-07-20 16:00:56.052616] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:21.275 [2024-07-20 16:00:56.053330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.275 [2024-07-20 16:00:56.053350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:21.275 [2024-07-20 16:00:56.053379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:25:21.275 [2024-07-20 16:00:56.053390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.275 [2024-07-20 16:00:56.053609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.275 [2024-07-20 16:00:56.053627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:21.275 [2024-07-20 16:00:56.053641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:25:21.275 [2024-07-20 16:00:56.053650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.275 [2024-07-20 16:00:56.056149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.275 [2024-07-20 16:00:56.056182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:21.275 [2024-07-20 16:00:56.056197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.480 ms 00:25:21.275 [2024-07-20 16:00:56.056206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.275 [2024-07-20 16:00:56.061213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.275 [2024-07-20 16:00:56.061245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:21.275 [2024-07-20 16:00:56.061259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.990 ms 00:25:21.275 [2024-07-20 16:00:56.061269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.275 [2024-07-20 16:00:56.062929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.275 [2024-07-20 16:00:56.062965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:21.275 [2024-07-20 16:00:56.062983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.569 ms 00:25:21.275 [2024-07-20 16:00:56.062993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.275 [2024-07-20 16:00:56.067701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.275 [2024-07-20 16:00:56.067740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:21.275 [2024-07-20 16:00:56.067756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.677 ms 00:25:21.275 [2024-07-20 16:00:56.067766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.275 [2024-07-20 16:00:56.067883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.275 [2024-07-20 16:00:56.067896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:21.275 [2024-07-20 16:00:56.067909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:25:21.275 [2024-07-20 16:00:56.067922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.534 [2024-07-20 
16:00:56.069835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.534 [2024-07-20 16:00:56.069868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:21.534 [2024-07-20 16:00:56.069882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.893 ms 00:25:21.534 [2024-07-20 16:00:56.069892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.534 [2024-07-20 16:00:56.071369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.534 [2024-07-20 16:00:56.071402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:21.534 [2024-07-20 16:00:56.071419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:25:21.534 [2024-07-20 16:00:56.071429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.534 [2024-07-20 16:00:56.072551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.534 [2024-07-20 16:00:56.072584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:21.534 [2024-07-20 16:00:56.072598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.087 ms 00:25:21.534 [2024-07-20 16:00:56.072608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.534 [2024-07-20 16:00:56.073871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.534 [2024-07-20 16:00:56.073903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:21.534 [2024-07-20 16:00:56.073917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.209 ms 00:25:21.534 [2024-07-20 16:00:56.073926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.534 [2024-07-20 16:00:56.073959] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:21.534 [2024-07-20 16:00:56.073976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.073993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074448] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 
16:00:56.074748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:21.534 [2024-07-20 16:00:56.074795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.074997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:25:21.535 [2024-07-20 16:00:56.075046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:21.535 [2024-07-20 16:00:56.075218] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:21.535 [2024-07-20 16:00:56.075242] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68669144-5c5a-4b7c-bb83-844974b2e831 00:25:21.535 [2024-07-20 16:00:56.075253] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:21.535 [2024-07-20 16:00:56.075276] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:21.535 [2024-07-20 16:00:56.075286] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:21.535 [2024-07-20 16:00:56.075299] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:21.535 [2024-07-20 16:00:56.075308] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:21.535 [2024-07-20 16:00:56.075321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:21.535 [2024-07-20 16:00:56.075333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:21.535 [2024-07-20 16:00:56.075345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:21.535 [2024-07-20 16:00:56.075370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:21.535 [2024-07-20 16:00:56.075384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.535 [2024-07-20 16:00:56.075394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:21.535 [2024-07-20 16:00:56.075407] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.428 ms 00:25:21.535 [2024-07-20 16:00:56.075417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.077189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.535 [2024-07-20 16:00:56.077210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:21.535 [2024-07-20 16:00:56.077227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:25:21.535 [2024-07-20 16:00:56.077237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.077349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.535 [2024-07-20 16:00:56.077370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:21.535 [2024-07-20 16:00:56.077384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:25:21.535 [2024-07-20 16:00:56.077393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.084527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.535 [2024-07-20 16:00:56.084553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:21.535 [2024-07-20 16:00:56.084568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.535 [2024-07-20 16:00:56.084581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.084642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.535 [2024-07-20 16:00:56.084652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:21.535 [2024-07-20 16:00:56.084665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.535 [2024-07-20 16:00:56.084675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.084741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.535 [2024-07-20 16:00:56.084753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:21.535 [2024-07-20 16:00:56.084768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.535 [2024-07-20 16:00:56.084778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.084801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.535 [2024-07-20 16:00:56.084811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:21.535 [2024-07-20 16:00:56.084823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.535 [2024-07-20 16:00:56.084832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.096570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.535 [2024-07-20 16:00:56.096620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:21.535 [2024-07-20 16:00:56.096635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.535 [2024-07-20 16:00:56.096645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.104970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.535 [2024-07-20 16:00:56.105004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:25:21.535 [2024-07-20 16:00:56.105019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.535 [2024-07-20 16:00:56.105029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.105116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.535 [2024-07-20 16:00:56.105128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:21.535 [2024-07-20 16:00:56.105145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.535 [2024-07-20 16:00:56.105154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.105194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.535 [2024-07-20 16:00:56.105208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:21.535 [2024-07-20 16:00:56.105222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.535 [2024-07-20 16:00:56.105231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.105313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.535 [2024-07-20 16:00:56.105325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:21.535 [2024-07-20 16:00:56.105338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.535 [2024-07-20 16:00:56.105347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.105412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.535 [2024-07-20 16:00:56.105424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:21.535 [2024-07-20 16:00:56.105440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.535 [2024-07-20 16:00:56.105449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.105498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.535 [2024-07-20 16:00:56.105509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:21.535 [2024-07-20 16:00:56.105524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.535 [2024-07-20 16:00:56.105533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.105588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.535 [2024-07-20 16:00:56.105603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:21.535 [2024-07-20 16:00:56.105615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.535 [2024-07-20 16:00:56.105631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.535 [2024-07-20 16:00:56.105764] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.312 ms, result 0 00:25:21.535 true 00:25:21.535 16:00:56 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94426 00:25:21.535 16:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 94426 ']' 00:25:21.535 16:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 94426 00:25:21.535 16:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # uname 00:25:21.535 16:00:56 
ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:21.535 16:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94426 00:25:21.535 16:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:21.535 killing process with pid 94426 00:25:21.535 16:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:21.535 16:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94426' 00:25:21.535 16:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@965 -- # kill 94426 00:25:21.535 16:00:56 ftl.ftl_restore_fast -- common/autotest_common.sh@970 -- # wait 94426 00:25:24.814 16:00:59 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:25:28.104 262144+0 records in 00:25:28.104 262144+0 records out 00:25:28.104 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.71755 s, 289 MB/s 00:25:28.104 16:01:02 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:30.007 16:01:04 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:30.007 [2024-07-20 16:01:04.526546] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:30.007 [2024-07-20 16:01:04.526673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94624 ] 00:25:30.007 [2024-07-20 16:01:04.676275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.007 [2024-07-20 16:01:04.717216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.266 [2024-07-20 16:01:04.817849] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:30.266 [2024-07-20 16:01:04.817914] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:30.266 [2024-07-20 16:01:04.968746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.266 [2024-07-20 16:01:04.968797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:30.266 [2024-07-20 16:01:04.968827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:30.266 [2024-07-20 16:01:04.968837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.266 [2024-07-20 16:01:04.968892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.266 [2024-07-20 16:01:04.968905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:30.266 [2024-07-20 16:01:04.968916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:30.266 [2024-07-20 16:01:04.968928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.266 [2024-07-20 16:01:04.968960] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:30.266 [2024-07-20 16:01:04.969205] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:30.266 [2024-07-20 16:01:04.969223] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:30.266 [2024-07-20 16:01:04.969236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:30.266 [2024-07-20 16:01:04.969246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:25:30.266 [2024-07-20 16:01:04.969255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.266 [2024-07-20 16:01:04.970707] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:30.266 [2024-07-20 16:01:04.973183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.266 [2024-07-20 16:01:04.973225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:30.266 [2024-07-20 16:01:04.973248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.481 ms 00:25:30.266 [2024-07-20 16:01:04.973258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.266 [2024-07-20 16:01:04.973313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.266 [2024-07-20 16:01:04.973336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:30.266 [2024-07-20 16:01:04.973347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:30.266 [2024-07-20 16:01:04.973367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.266 [2024-07-20 16:01:04.980260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.266 [2024-07-20 16:01:04.980293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:30.266 [2024-07-20 16:01:04.980305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.850 ms 00:25:30.266 [2024-07-20 16:01:04.980315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.266 [2024-07-20 16:01:04.980410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.266 [2024-07-20 16:01:04.980424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:30.266 [2024-07-20 16:01:04.980440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:30.266 [2024-07-20 16:01:04.980455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.266 [2024-07-20 16:01:04.980516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.266 [2024-07-20 16:01:04.980527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:30.266 [2024-07-20 16:01:04.980544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:30.267 [2024-07-20 16:01:04.980552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.267 [2024-07-20 16:01:04.980578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:30.267 [2024-07-20 16:01:04.982215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.267 [2024-07-20 16:01:04.982242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:30.267 [2024-07-20 16:01:04.982254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.648 ms 00:25:30.267 [2024-07-20 16:01:04.982271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.267 [2024-07-20 16:01:04.982303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.267 [2024-07-20 16:01:04.982314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Decorate bands 00:25:30.267 [2024-07-20 16:01:04.982333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:30.267 [2024-07-20 16:01:04.982342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.267 [2024-07-20 16:01:04.982379] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:30.267 [2024-07-20 16:01:04.982402] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:30.267 [2024-07-20 16:01:04.982441] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:30.267 [2024-07-20 16:01:04.982464] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:30.267 [2024-07-20 16:01:04.982544] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:30.267 [2024-07-20 16:01:04.982559] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:30.267 [2024-07-20 16:01:04.982574] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:30.267 [2024-07-20 16:01:04.982588] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:30.267 [2024-07-20 16:01:04.982605] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:30.267 [2024-07-20 16:01:04.982616] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:30.267 [2024-07-20 16:01:04.982625] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:30.267 [2024-07-20 16:01:04.982635] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:30.267 [2024-07-20 16:01:04.982644] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:30.267 [2024-07-20 16:01:04.982654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.267 [2024-07-20 16:01:04.982664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:30.267 [2024-07-20 16:01:04.982674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:25:30.267 [2024-07-20 16:01:04.982687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.267 [2024-07-20 16:01:04.982753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.267 [2024-07-20 16:01:04.982763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:30.267 [2024-07-20 16:01:04.982773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:30.267 [2024-07-20 16:01:04.982782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.267 [2024-07-20 16:01:04.982869] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:30.267 [2024-07-20 16:01:04.982887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:30.267 [2024-07-20 16:01:04.982905] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.267 [2024-07-20 16:01:04.982915] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.267 [2024-07-20 16:01:04.982928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:30.267 [2024-07-20 
16:01:04.982937] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:30.267 [2024-07-20 16:01:04.982946] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:30.267 [2024-07-20 16:01:04.982956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:30.267 [2024-07-20 16:01:04.982965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:30.267 [2024-07-20 16:01:04.982974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.267 [2024-07-20 16:01:04.982984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:30.267 [2024-07-20 16:01:04.982994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:30.267 [2024-07-20 16:01:04.983003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.267 [2024-07-20 16:01:04.983012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:30.267 [2024-07-20 16:01:04.983021] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:30.267 [2024-07-20 16:01:04.983030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.267 [2024-07-20 16:01:04.983041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:30.267 [2024-07-20 16:01:04.983051] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:30.267 [2024-07-20 16:01:04.983059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.267 [2024-07-20 16:01:04.983068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:30.267 [2024-07-20 16:01:04.983077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:30.267 [2024-07-20 16:01:04.983086] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.267 [2024-07-20 16:01:04.983095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:30.267 [2024-07-20 16:01:04.983104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:30.267 [2024-07-20 16:01:04.983113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.267 [2024-07-20 16:01:04.983122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:30.267 [2024-07-20 16:01:04.983131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:30.267 [2024-07-20 16:01:04.983140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.267 [2024-07-20 16:01:04.983150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:30.267 [2024-07-20 16:01:04.983159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:30.267 [2024-07-20 16:01:04.983167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.267 [2024-07-20 16:01:04.983176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:30.267 [2024-07-20 16:01:04.983190] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:30.267 [2024-07-20 16:01:04.983199] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.267 [2024-07-20 16:01:04.983208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:30.267 [2024-07-20 16:01:04.983216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:30.267 [2024-07-20 16:01:04.983225] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:25:30.267 [2024-07-20 16:01:04.983235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:30.267 [2024-07-20 16:01:04.983244] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:30.267 [2024-07-20 16:01:04.983253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.267 [2024-07-20 16:01:04.983261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:30.267 [2024-07-20 16:01:04.983270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:30.267 [2024-07-20 16:01:04.983279] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.267 [2024-07-20 16:01:04.983288] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:30.267 [2024-07-20 16:01:04.983298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:30.267 [2024-07-20 16:01:04.983308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.267 [2024-07-20 16:01:04.983317] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.267 [2024-07-20 16:01:04.983326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:30.267 [2024-07-20 16:01:04.983338] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:30.267 [2024-07-20 16:01:04.983347] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:30.267 [2024-07-20 16:01:04.983372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:30.267 [2024-07-20 16:01:04.983381] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:30.267 [2024-07-20 16:01:04.983391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:30.267 [2024-07-20 16:01:04.983401] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:30.267 [2024-07-20 16:01:04.983413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.267 [2024-07-20 16:01:04.983425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:30.267 [2024-07-20 16:01:04.983435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:30.267 [2024-07-20 16:01:04.983446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:30.267 [2024-07-20 16:01:04.983456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:30.267 [2024-07-20 16:01:04.983466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:30.267 [2024-07-20 16:01:04.983476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:30.267 [2024-07-20 16:01:04.983485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:30.267 [2024-07-20 16:01:04.983498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:30.267 [2024-07-20 
16:01:04.983508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:30.267 [2024-07-20 16:01:04.983521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:30.267 [2024-07-20 16:01:04.983531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:30.267 [2024-07-20 16:01:04.983541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:30.267 [2024-07-20 16:01:04.983551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:30.267 [2024-07-20 16:01:04.983561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:30.267 [2024-07-20 16:01:04.983570] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:30.267 [2024-07-20 16:01:04.983581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.267 [2024-07-20 16:01:04.983591] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:30.267 [2024-07-20 16:01:04.983601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:30.268 [2024-07-20 16:01:04.983619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:30.268 [2024-07-20 16:01:04.983630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:30.268 [2024-07-20 16:01:04.983640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.268 [2024-07-20 16:01:04.983651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:30.268 [2024-07-20 16:01:04.983660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:25:30.268 [2024-07-20 16:01:04.983680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.268 [2024-07-20 16:01:05.007101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.268 [2024-07-20 16:01:05.007201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:30.268 [2024-07-20 16:01:05.007247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.406 ms 00:25:30.268 [2024-07-20 16:01:05.007282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.268 [2024-07-20 16:01:05.007564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.268 [2024-07-20 16:01:05.007603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:30.268 [2024-07-20 16:01:05.007638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:25:30.268 [2024-07-20 16:01:05.007682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.268 [2024-07-20 16:01:05.024852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.268 [2024-07-20 
16:01:05.024928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:30.268 [2024-07-20 16:01:05.024956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.027 ms 00:25:30.268 [2024-07-20 16:01:05.024978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.268 [2024-07-20 16:01:05.025040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.268 [2024-07-20 16:01:05.025062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:30.268 [2024-07-20 16:01:05.025083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:30.268 [2024-07-20 16:01:05.025112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.268 [2024-07-20 16:01:05.025755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.268 [2024-07-20 16:01:05.025791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:30.268 [2024-07-20 16:01:05.025813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:25:30.268 [2024-07-20 16:01:05.025833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.268 [2024-07-20 16:01:05.026053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.268 [2024-07-20 16:01:05.026085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:30.268 [2024-07-20 16:01:05.026107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:25:30.268 [2024-07-20 16:01:05.026158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.268 [2024-07-20 16:01:05.033435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.268 [2024-07-20 16:01:05.033475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:30.268 [2024-07-20 16:01:05.033493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.243 ms 00:25:30.268 [2024-07-20 16:01:05.033507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.268 [2024-07-20 16:01:05.037143] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:30.268 [2024-07-20 16:01:05.037208] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:30.268 [2024-07-20 16:01:05.037236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.268 [2024-07-20 16:01:05.037257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:30.268 [2024-07-20 16:01:05.037278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.619 ms 00:25:30.268 [2024-07-20 16:01:05.037298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.268 [2024-07-20 16:01:05.050555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.268 [2024-07-20 16:01:05.050600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:30.268 [2024-07-20 16:01:05.050614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.189 ms 00:25:30.268 [2024-07-20 16:01:05.050624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.268 [2024-07-20 16:01:05.052390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.268 [2024-07-20 16:01:05.052423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:25:30.268 [2024-07-20 16:01:05.052434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.703 ms 00:25:30.268 [2024-07-20 16:01:05.052444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.268 [2024-07-20 16:01:05.053883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.268 [2024-07-20 16:01:05.053915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:30.268 [2024-07-20 16:01:05.053927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.406 ms 00:25:30.268 [2024-07-20 16:01:05.053936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.268 [2024-07-20 16:01:05.054219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.268 [2024-07-20 16:01:05.054235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:30.268 [2024-07-20 16:01:05.054246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:25:30.268 [2024-07-20 16:01:05.054256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.527 [2024-07-20 16:01:05.074498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.527 [2024-07-20 16:01:05.074565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:30.527 [2024-07-20 16:01:05.074582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.250 ms 00:25:30.527 [2024-07-20 16:01:05.074593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.527 [2024-07-20 16:01:05.080622] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:30.527 [2024-07-20 16:01:05.082852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.527 [2024-07-20 16:01:05.082883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:30.527 [2024-07-20 16:01:05.082903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.226 ms 00:25:30.527 [2024-07-20 16:01:05.082914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.527 [2024-07-20 16:01:05.082968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.527 [2024-07-20 16:01:05.082980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:30.527 [2024-07-20 16:01:05.082991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:30.527 [2024-07-20 16:01:05.083000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.527 [2024-07-20 16:01:05.083078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.527 [2024-07-20 16:01:05.083090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:30.527 [2024-07-20 16:01:05.083104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:30.527 [2024-07-20 16:01:05.083117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.527 [2024-07-20 16:01:05.083137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.527 [2024-07-20 16:01:05.083147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:30.527 [2024-07-20 16:01:05.083157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:30.527 [2024-07-20 16:01:05.083166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
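The restore-path startup traced here runs inside spdk_dd rather than the RPC target: after the first app was killed (the killprocess trace above), restore.sh generated a 1 GiB random test file, recorded its md5sum for later verification, and replays it into the re-created ftl0 bdev using the saved JSON config. Note the Restore steps in this trace (NV cache, valid map, band info, trim, P2L checkpoints, L2P) in place of the Clear/Wipe steps of the initial startup, and that the layout dump above checks out arithmetically: 20971520 L2P entries of 4 bytes each is exactly the 80.00 MiB l2p region. A sketch of the data step, with the commands copied from the shell trace earlier in this log (only the SPDK shorthand is mine):

    # 256K blocks of 4 KiB = 2^30 bytes = 1 GiB, matching the dd summary
    # above: 1073741824 bytes in ~3.72 s, ~289 MB/s out of /dev/urandom.
    SPDK=/home/vagrant/spdk_repo/spdk
    dd if=/dev/urandom of="$SPDK/test/ftl/testfile" bs=4K count=256K
    md5sum "$SPDK/test/ftl/testfile"   # checksum to compare after restore
    "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile" --ob=ftl0 \
        --json="$SPDK/test/ftl/config/ftl.json"

The Copying: N/1024 [MB] progress that follows is spdk_dd streaming the file into ftl0 at roughly 25 MBps, and the shutdown after it takes the fast path (Fast persist NV cache metadata, Set FTL SHM clean state) that this ftl_restore_fast test exercises.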
00:25:30.527 [2024-07-20 16:01:05.083199] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:30.527 [2024-07-20 16:01:05.083211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.527 [2024-07-20 16:01:05.083221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:30.527 [2024-07-20 16:01:05.083231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:30.527 [2024-07-20 16:01:05.083258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.527 [2024-07-20 16:01:05.086800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.527 [2024-07-20 16:01:05.086834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:30.527 [2024-07-20 16:01:05.086846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.529 ms 00:25:30.527 [2024-07-20 16:01:05.086856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.527 [2024-07-20 16:01:05.086918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.527 [2024-07-20 16:01:05.086929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:30.527 [2024-07-20 16:01:05.086950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:30.527 [2024-07-20 16:01:05.086960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.527 [2024-07-20 16:01:05.088012] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 119.070 ms, result 0 00:26:10.203  Copying: 25/1024 [MB] (25 MBps) Copying: 51/1024 [MB] (25 MBps) Copying: 76/1024 [MB] (25 MBps) Copying: 102/1024 [MB] (26 MBps) Copying: 127/1024 [MB] (25 MBps) Copying: 154/1024 [MB] (26 MBps) Copying: 180/1024 [MB] (25 MBps) Copying: 203/1024 [MB] (23 MBps) Copying: 228/1024 [MB] (24 MBps) Copying: 253/1024 [MB] (24 MBps) Copying: 277/1024 [MB] (24 MBps) Copying: 301/1024 [MB] (23 MBps) Copying: 326/1024 [MB] (24 MBps) Copying: 351/1024 [MB] (24 MBps) Copying: 375/1024 [MB] (24 MBps) Copying: 399/1024 [MB] (24 MBps) Copying: 426/1024 [MB] (26 MBps) Copying: 452/1024 [MB] (25 MBps) Copying: 478/1024 [MB] (26 MBps) Copying: 504/1024 [MB] (26 MBps) Copying: 530/1024 [MB] (25 MBps) Copying: 555/1024 [MB] (24 MBps) Copying: 581/1024 [MB] (26 MBps) Copying: 608/1024 [MB] (26 MBps) Copying: 633/1024 [MB] (25 MBps) Copying: 660/1024 [MB] (26 MBps) Copying: 687/1024 [MB] (26 MBps) Copying: 713/1024 [MB] (26 MBps) Copying: 739/1024 [MB] (26 MBps) Copying: 765/1024 [MB] (26 MBps) Copying: 792/1024 [MB] (26 MBps) Copying: 818/1024 [MB] (26 MBps) Copying: 844/1024 [MB] (25 MBps) Copying: 870/1024 [MB] (25 MBps) Copying: 895/1024 [MB] (25 MBps) Copying: 921/1024 [MB] (26 MBps) Copying: 948/1024 [MB] (26 MBps) Copying: 974/1024 [MB] (25 MBps) Copying: 1000/1024 [MB] (25 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-20 16:01:44.967283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.203 [2024-07-20 16:01:44.967334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:10.203 [2024-07-20 16:01:44.967349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:10.203 [2024-07-20 16:01:44.967387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.203 [2024-07-20 16:01:44.967409] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:26:10.203 [2024-07-20 16:01:44.968075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.203 [2024-07-20 16:01:44.968094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:10.203 [2024-07-20 16:01:44.968104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:26:10.203 [2024-07-20 16:01:44.968121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.203 [2024-07-20 16:01:44.969686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.203 [2024-07-20 16:01:44.969726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:10.203 [2024-07-20 16:01:44.969738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:26:10.203 [2024-07-20 16:01:44.969748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.203 [2024-07-20 16:01:44.969777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.203 [2024-07-20 16:01:44.969787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:10.203 [2024-07-20 16:01:44.969797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:10.203 [2024-07-20 16:01:44.969806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.203 [2024-07-20 16:01:44.969851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.203 [2024-07-20 16:01:44.969862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:10.203 [2024-07-20 16:01:44.969871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:10.203 [2024-07-20 16:01:44.969881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.203 [2024-07-20 16:01:44.969902] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:10.203 [2024-07-20 16:01:44.969925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:10.203 [2024-07-20 16:01:44.969937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:10.203 [2024-07-20 16:01:44.969948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:10.203 [2024-07-20 16:01:44.969958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:10.203 [2024-07-20 16:01:44.969968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:10.203 [2024-07-20 16:01:44.969979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.969989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.969999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 
00:26:10.204 [2024-07-20 16:01:44.970040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 
wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.970990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971594] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:10.204 [2024-07-20 16:01:44.971754] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:10.204 [2024-07-20 16:01:44.971764] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68669144-5c5a-4b7c-bb83-844974b2e831 00:26:10.204 [2024-07-20 16:01:44.971775] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:10.204 [2024-07-20 16:01:44.971784] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:26:10.204 [2024-07-20 16:01:44.971793] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:10.204 [2024-07-20 16:01:44.971803] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:10.204 [2024-07-20 16:01:44.971812] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:10.204 [2024-07-20 16:01:44.971821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:10.204 [2024-07-20 16:01:44.971830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:10.204 [2024-07-20 16:01:44.971840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:10.204 [2024-07-20 16:01:44.971848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:10.204 [2024-07-20 16:01:44.971859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.204 [2024-07-20 16:01:44.971878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
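In the statistics dump above, WAF is the write amplification factor: the ratio of writes the FTL issued to the media versus writes requested by the user. With 'user writes: 0' (this stage only replayed metadata, and all 100 bands are still free with 0 valid blocks) the ratio is printed as 'inf'. A minimal sketch of that arithmetic follows, assuming 'total writes' counts media-side writes; the 48/32 case is a made-up run with real user I/O:

    import math

    def waf(total_writes: int, user_writes: int) -> float:
        # Write amplification: media writes per user write; undefined
        # (reported as inf) when no user writes have happened yet.
        return math.inf if user_writes == 0 else total_writes / user_writes

    assert waf(32, 0) == math.inf          # matches 'total writes: 32', 'user writes: 0', 'WAF: inf'
    assert abs(waf(48, 32) - 1.5) < 1e-9   # hypothetical run with user I/O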
00:26:10.204 [2024-07-20 16:01:44.971898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.961 ms 00:26:10.204 [2024-07-20 16:01:44.971908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.204 [2024-07-20 16:01:44.973576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.204 [2024-07-20 16:01:44.973596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:10.204 [2024-07-20 16:01:44.973607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.648 ms 00:26:10.204 [2024-07-20 16:01:44.973616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.204 [2024-07-20 16:01:44.973724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.204 [2024-07-20 16:01:44.973738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:10.204 [2024-07-20 16:01:44.973748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:26:10.204 [2024-07-20 16:01:44.973758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.204 [2024-07-20 16:01:44.979839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.204 [2024-07-20 16:01:44.979947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:10.204 [2024-07-20 16:01:44.980028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.204 [2024-07-20 16:01:44.980061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.204 [2024-07-20 16:01:44.980128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.204 [2024-07-20 16:01:44.980174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:10.204 [2024-07-20 16:01:44.980203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.204 [2024-07-20 16:01:44.980230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.204 [2024-07-20 16:01:44.980300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.204 [2024-07-20 16:01:44.980426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:10.204 [2024-07-20 16:01:44.980502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.204 [2024-07-20 16:01:44.980543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.204 [2024-07-20 16:01:44.980580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.204 [2024-07-20 16:01:44.980610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:10.204 [2024-07-20 16:01:44.980643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.204 [2024-07-20 16:01:44.980670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.204 [2024-07-20 16:01:44.991874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.204 [2024-07-20 16:01:44.992044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:10.204 [2024-07-20 16:01:44.992183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.204 [2024-07-20 16:01:44.992199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.464 [2024-07-20 16:01:45.000491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.464 [2024-07-20 16:01:45.000531] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:10.464 [2024-07-20 16:01:45.000545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.464 [2024-07-20 16:01:45.000555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.464 [2024-07-20 16:01:45.000621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.464 [2024-07-20 16:01:45.000633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:10.464 [2024-07-20 16:01:45.000643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.464 [2024-07-20 16:01:45.000652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.464 [2024-07-20 16:01:45.000678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.464 [2024-07-20 16:01:45.000688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:10.464 [2024-07-20 16:01:45.000698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.464 [2024-07-20 16:01:45.000711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.464 [2024-07-20 16:01:45.000775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.464 [2024-07-20 16:01:45.000788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:10.464 [2024-07-20 16:01:45.000798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.464 [2024-07-20 16:01:45.000807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.464 [2024-07-20 16:01:45.000837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.464 [2024-07-20 16:01:45.000848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:10.465 [2024-07-20 16:01:45.000858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.465 [2024-07-20 16:01:45.000868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.465 [2024-07-20 16:01:45.000908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.465 [2024-07-20 16:01:45.000918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:10.465 [2024-07-20 16:01:45.000928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.465 [2024-07-20 16:01:45.000938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.465 [2024-07-20 16:01:45.000978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:10.465 [2024-07-20 16:01:45.000990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:10.465 [2024-07-20 16:01:45.001000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:10.465 [2024-07-20 16:01:45.001013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.465 [2024-07-20 16:01:45.001129] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 33.866 ms, result 0 00:26:11.032 00:26:11.032 00:26:11.032 16:01:45 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:26:11.032 [2024-07-20 16:01:45.721320] Starting SPDK v24.05.1-pre git 
sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:11.032 [2024-07-20 16:01:45.721457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95042 ] 00:26:11.298 [2024-07-20 16:01:45.870932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.298 [2024-07-20 16:01:45.922709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.298 [2024-07-20 16:01:46.025710] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:11.298 [2024-07-20 16:01:46.025781] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:11.568 [2024-07-20 16:01:46.176422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.568 [2024-07-20 16:01:46.176472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:11.568 [2024-07-20 16:01:46.176487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:11.568 [2024-07-20 16:01:46.176513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.568 [2024-07-20 16:01:46.176567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.568 [2024-07-20 16:01:46.176580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:11.568 [2024-07-20 16:01:46.176597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:11.568 [2024-07-20 16:01:46.176609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.568 [2024-07-20 16:01:46.176629] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:11.568 [2024-07-20 16:01:46.176818] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:11.568 [2024-07-20 16:01:46.176835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.568 [2024-07-20 16:01:46.176848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:11.568 [2024-07-20 16:01:46.176858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:26:11.568 [2024-07-20 16:01:46.176868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.568 [2024-07-20 16:01:46.177149] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:26:11.568 [2024-07-20 16:01:46.177169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.568 [2024-07-20 16:01:46.177188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:11.568 [2024-07-20 16:01:46.177199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:11.568 [2024-07-20 16:01:46.177211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.568 [2024-07-20 16:01:46.177290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.568 [2024-07-20 16:01:46.177302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:11.568 [2024-07-20 16:01:46.177313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:11.568 [2024-07-20 16:01:46.177328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.568 [2024-07-20 16:01:46.177743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:11.568 [2024-07-20 16:01:46.177758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:11.568 [2024-07-20 16:01:46.177769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:26:11.568 [2024-07-20 16:01:46.177782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.568 [2024-07-20 16:01:46.177867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.568 [2024-07-20 16:01:46.177880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:11.568 [2024-07-20 16:01:46.177890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:11.568 [2024-07-20 16:01:46.177907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.568 [2024-07-20 16:01:46.177935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.568 [2024-07-20 16:01:46.177945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:11.568 [2024-07-20 16:01:46.177956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:11.568 [2024-07-20 16:01:46.177969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.568 [2024-07-20 16:01:46.177993] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:11.568 [2024-07-20 16:01:46.179776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.569 [2024-07-20 16:01:46.179797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:11.569 [2024-07-20 16:01:46.179808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:26:11.569 [2024-07-20 16:01:46.179817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.569 [2024-07-20 16:01:46.179856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.569 [2024-07-20 16:01:46.179869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:11.569 [2024-07-20 16:01:46.179879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:11.569 [2024-07-20 16:01:46.179888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.569 [2024-07-20 16:01:46.179910] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:11.569 [2024-07-20 16:01:46.179939] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:11.569 [2024-07-20 16:01:46.179977] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:11.569 [2024-07-20 16:01:46.179999] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:11.569 [2024-07-20 16:01:46.180080] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:11.569 [2024-07-20 16:01:46.180104] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:11.569 [2024-07-20 16:01:46.180123] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:11.569 [2024-07-20 16:01:46.180138] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:11.569 [2024-07-20 16:01:46.180149] 
ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:11.569 [2024-07-20 16:01:46.180160] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:11.569 [2024-07-20 16:01:46.180169] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:11.569 [2024-07-20 16:01:46.180178] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:11.569 [2024-07-20 16:01:46.180187] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:11.569 [2024-07-20 16:01:46.180199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.569 [2024-07-20 16:01:46.180209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:11.569 [2024-07-20 16:01:46.180225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:26:11.569 [2024-07-20 16:01:46.180235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.569 [2024-07-20 16:01:46.180298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.569 [2024-07-20 16:01:46.180314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:11.569 [2024-07-20 16:01:46.180330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:11.569 [2024-07-20 16:01:46.180339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.569 [2024-07-20 16:01:46.180452] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:11.569 [2024-07-20 16:01:46.180466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:11.569 [2024-07-20 16:01:46.180484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:11.569 [2024-07-20 16:01:46.180494] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.569 [2024-07-20 16:01:46.180503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:11.569 [2024-07-20 16:01:46.180512] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:11.569 [2024-07-20 16:01:46.180521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:11.569 [2024-07-20 16:01:46.180533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:11.569 [2024-07-20 16:01:46.180545] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:11.569 [2024-07-20 16:01:46.180554] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:11.569 [2024-07-20 16:01:46.180563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:11.569 [2024-07-20 16:01:46.180572] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:11.569 [2024-07-20 16:01:46.180581] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:11.569 [2024-07-20 16:01:46.180590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:11.569 [2024-07-20 16:01:46.180599] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:11.569 [2024-07-20 16:01:46.180608] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.569 [2024-07-20 16:01:46.180617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:11.569 [2024-07-20 16:01:46.180626] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:11.569 [2024-07-20 16:01:46.180634] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.569 [2024-07-20 16:01:46.180643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:11.569 [2024-07-20 16:01:46.180652] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:11.569 [2024-07-20 16:01:46.180661] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:11.569 [2024-07-20 16:01:46.180669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:11.569 [2024-07-20 16:01:46.180678] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:11.569 [2024-07-20 16:01:46.180690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:11.569 [2024-07-20 16:01:46.180698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:11.569 [2024-07-20 16:01:46.180707] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:11.569 [2024-07-20 16:01:46.180716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:11.569 [2024-07-20 16:01:46.180724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:11.569 [2024-07-20 16:01:46.180733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:11.569 [2024-07-20 16:01:46.180741] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:11.569 [2024-07-20 16:01:46.180750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:11.569 [2024-07-20 16:01:46.180758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:11.569 [2024-07-20 16:01:46.180767] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:11.569 [2024-07-20 16:01:46.180775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:11.569 [2024-07-20 16:01:46.180784] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:11.569 [2024-07-20 16:01:46.180793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:11.569 [2024-07-20 16:01:46.180801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:11.569 [2024-07-20 16:01:46.180810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:11.569 [2024-07-20 16:01:46.180819] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.569 [2024-07-20 16:01:46.180833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:11.569 [2024-07-20 16:01:46.180842] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:11.569 [2024-07-20 16:01:46.180851] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.569 [2024-07-20 16:01:46.180859] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:11.569 [2024-07-20 16:01:46.180869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:11.569 [2024-07-20 16:01:46.180881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:11.569 [2024-07-20 16:01:46.180890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:11.569 [2024-07-20 16:01:46.180900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:11.569 [2024-07-20 16:01:46.180909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:11.569 [2024-07-20 16:01:46.180917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:11.569 
[2024-07-20 16:01:46.180926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:11.569 [2024-07-20 16:01:46.180935] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:11.569 [2024-07-20 16:01:46.180944] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:11.569 [2024-07-20 16:01:46.180954] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:11.569 [2024-07-20 16:01:46.180965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:11.569 [2024-07-20 16:01:46.180976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:11.569 [2024-07-20 16:01:46.180988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:11.569 [2024-07-20 16:01:46.180998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:11.569 [2024-07-20 16:01:46.181008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:11.569 [2024-07-20 16:01:46.181019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:11.569 [2024-07-20 16:01:46.181029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:11.569 [2024-07-20 16:01:46.181039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:11.569 [2024-07-20 16:01:46.181049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:11.569 [2024-07-20 16:01:46.181058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:11.569 [2024-07-20 16:01:46.181068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:11.569 [2024-07-20 16:01:46.181077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:11.569 [2024-07-20 16:01:46.181087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:11.569 [2024-07-20 16:01:46.181096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:11.569 [2024-07-20 16:01:46.181107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:11.570 [2024-07-20 16:01:46.181116] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:11.570 [2024-07-20 16:01:46.181126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:11.570 [2024-07-20 16:01:46.181145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:11.570 [2024-07-20 16:01:46.181157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:11.570 [2024-07-20 16:01:46.181168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:11.570 [2024-07-20 16:01:46.181177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:11.570 [2024-07-20 16:01:46.181195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.181212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:11.570 [2024-07-20 16:01:46.181221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:26:11.570 [2024-07-20 16:01:46.181231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.201455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.201547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:11.570 [2024-07-20 16:01:46.201590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.205 ms 00:26:11.570 [2024-07-20 16:01:46.201623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.201854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.201888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:11.570 [2024-07-20 16:01:46.201922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:26:11.570 [2024-07-20 16:01:46.201952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.220011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.220062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:11.570 [2024-07-20 16:01:46.220083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.942 ms 00:26:11.570 [2024-07-20 16:01:46.220100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.220150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.220167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:11.570 [2024-07-20 16:01:46.220191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:11.570 [2024-07-20 16:01:46.220220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.220389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.220410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:11.570 [2024-07-20 16:01:46.220428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:11.570 [2024-07-20 16:01:46.220444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.220620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.220642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:11.570 [2024-07-20 16:01:46.220659] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:26:11.570 [2024-07-20 16:01:46.220675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.228151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.228203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:11.570 [2024-07-20 16:01:46.228223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.456 ms 00:26:11.570 [2024-07-20 16:01:46.228240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.228427] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:11.570 [2024-07-20 16:01:46.228454] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:11.570 [2024-07-20 16:01:46.228474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.228503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:11.570 [2024-07-20 16:01:46.228526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:26:11.570 [2024-07-20 16:01:46.228542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.240805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.240845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:11.570 [2024-07-20 16:01:46.240866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.250 ms 00:26:11.570 [2024-07-20 16:01:46.240882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.240980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.240991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:11.570 [2024-07-20 16:01:46.241002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:26:11.570 [2024-07-20 16:01:46.241011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.241060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.241080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:11.570 [2024-07-20 16:01:46.241091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:11.570 [2024-07-20 16:01:46.241101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.241346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.241388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:11.570 [2024-07-20 16:01:46.241416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:26:11.570 [2024-07-20 16:01:46.241426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.241444] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:26:11.570 [2024-07-20 16:01:46.241457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.241479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:26:11.570 [2024-07-20 16:01:46.241489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:11.570 [2024-07-20 16:01:46.241498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.248923] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:11.570 [2024-07-20 16:01:46.249121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.249134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:11.570 [2024-07-20 16:01:46.249144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.614 ms 00:26:11.570 [2024-07-20 16:01:46.249157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.251384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.251418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:11.570 [2024-07-20 16:01:46.251429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:26:11.570 [2024-07-20 16:01:46.251439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.251516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.251529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:11.570 [2024-07-20 16:01:46.251540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:11.570 [2024-07-20 16:01:46.251553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.251593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.251603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:11.570 [2024-07-20 16:01:46.251613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:11.570 [2024-07-20 16:01:46.251629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.251662] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:11.570 [2024-07-20 16:01:46.251674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.251683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:11.570 [2024-07-20 16:01:46.251693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:11.570 [2024-07-20 16:01:46.251702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.255696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.255730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:11.570 [2024-07-20 16:01:46.255742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.980 ms 00:26:11.570 [2024-07-20 16:01:46.255752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.255810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.570 [2024-07-20 16:01:46.255822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:11.570 [2024-07-20 16:01:46.255832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.026 ms 00:26:11.570 [2024-07-20 16:01:46.255842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.570 [2024-07-20 16:01:46.256869] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 80.177 ms, result 0 00:26:48.929  Copying: 26/1024 [MB] (26 MBps) Copying: 54/1024 [MB] (27 MBps) Copying: 81/1024 [MB] (27 MBps) Copying: 108/1024 [MB] (27 MBps) Copying: 137/1024 [MB] (28 MBps) Copying: 164/1024 [MB] (27 MBps) Copying: 191/1024 [MB] (27 MBps) Copying: 218/1024 [MB] (26 MBps) Copying: 246/1024 [MB] (27 MBps) Copying: 273/1024 [MB] (27 MBps) Copying: 300/1024 [MB] (27 MBps) Copying: 327/1024 [MB] (27 MBps) Copying: 354/1024 [MB] (27 MBps) Copying: 382/1024 [MB] (28 MBps) Copying: 410/1024 [MB] (27 MBps) Copying: 438/1024 [MB] (28 MBps) Copying: 466/1024 [MB] (28 MBps) Copying: 494/1024 [MB] (28 MBps) Copying: 523/1024 [MB] (28 MBps) Copying: 549/1024 [MB] (26 MBps) Copying: 576/1024 [MB] (27 MBps) Copying: 604/1024 [MB] (27 MBps) Copying: 632/1024 [MB] (28 MBps) Copying: 660/1024 [MB] (28 MBps) Copying: 688/1024 [MB] (27 MBps) Copying: 716/1024 [MB] (28 MBps) Copying: 744/1024 [MB] (27 MBps) Copying: 772/1024 [MB] (28 MBps) Copying: 800/1024 [MB] (27 MBps) Copying: 828/1024 [MB] (27 MBps) Copying: 855/1024 [MB] (27 MBps) Copying: 882/1024 [MB] (27 MBps) Copying: 909/1024 [MB] (27 MBps) Copying: 936/1024 [MB] (27 MBps) Copying: 964/1024 [MB] (27 MBps) Copying: 994/1024 [MB] (29 MBps) Copying: 1021/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-20 16:02:23.530855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.930 [2024-07-20 16:02:23.530926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:48.930 [2024-07-20 16:02:23.530947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:48.930 [2024-07-20 16:02:23.530959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.930 [2024-07-20 16:02:23.530992] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:48.930 [2024-07-20 16:02:23.531721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.930 [2024-07-20 16:02:23.531737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:48.930 [2024-07-20 16:02:23.531750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:26:48.930 [2024-07-20 16:02:23.531770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.930 [2024-07-20 16:02:23.531986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.930 [2024-07-20 16:02:23.532004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:48.930 [2024-07-20 16:02:23.532017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:26:48.930 [2024-07-20 16:02:23.532028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.930 [2024-07-20 16:02:23.532063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.930 [2024-07-20 16:02:23.532075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:48.930 [2024-07-20 16:02:23.532088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:48.930 [2024-07-20 16:02:23.532098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.930 
[2024-07-20 16:02:23.532149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.930 [2024-07-20 16:02:23.532161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:48.930 [2024-07-20 16:02:23.532173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:48.930 [2024-07-20 16:02:23.532184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.930 [2024-07-20 16:02:23.532202] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:48.930 [2024-07-20 16:02:23.532218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:26:48.930 [2024-07-20 16:02:23.532499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.532989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:48.930 [2024-07-20 16:02:23.533197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533416] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:48.931 [2024-07-20 16:02:23.533485] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:48.931 [2024-07-20 16:02:23.533509] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68669144-5c5a-4b7c-bb83-844974b2e831 00:26:48.931 [2024-07-20 16:02:23.533530] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:48.931 [2024-07-20 16:02:23.533542] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:26:48.931 [2024-07-20 16:02:23.533553] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:48.931 [2024-07-20 16:02:23.533564] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:48.931 [2024-07-20 16:02:23.533575] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:48.931 [2024-07-20 16:02:23.533587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:48.931 [2024-07-20 16:02:23.533599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:48.931 [2024-07-20 16:02:23.533609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:48.931 [2024-07-20 16:02:23.533620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:48.931 [2024-07-20 16:02:23.533631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.931 [2024-07-20 16:02:23.533643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:48.931 [2024-07-20 16:02:23.533655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:26:48.931 [2024-07-20 16:02:23.533678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.535566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.931 [2024-07-20 16:02:23.535592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:48.931 [2024-07-20 16:02:23.535605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.872 ms 00:26:48.931 [2024-07-20 16:02:23.535617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.535729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.931 [2024-07-20 16:02:23.535742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:48.931 [2024-07-20 16:02:23.535759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:26:48.931 [2024-07-20 16:02:23.535770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.542537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.931 [2024-07-20 16:02:23.542567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:48.931 [2024-07-20 16:02:23.542588] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.931 [2024-07-20 16:02:23.542599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.542654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.931 [2024-07-20 16:02:23.542665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:48.931 [2024-07-20 16:02:23.542680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.931 [2024-07-20 16:02:23.542697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.542757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.931 [2024-07-20 16:02:23.542771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:48.931 [2024-07-20 16:02:23.542783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.931 [2024-07-20 16:02:23.542793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.542810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.931 [2024-07-20 16:02:23.542820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:48.931 [2024-07-20 16:02:23.542831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.931 [2024-07-20 16:02:23.542846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.556408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.931 [2024-07-20 16:02:23.556464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:48.931 [2024-07-20 16:02:23.556485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.931 [2024-07-20 16:02:23.556495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.566828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.931 [2024-07-20 16:02:23.566874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:48.931 [2024-07-20 16:02:23.566894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.931 [2024-07-20 16:02:23.566905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.566963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.931 [2024-07-20 16:02:23.566974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:48.931 [2024-07-20 16:02:23.566985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.931 [2024-07-20 16:02:23.566994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.567032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.931 [2024-07-20 16:02:23.567042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:48.931 [2024-07-20 16:02:23.567053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.931 [2024-07-20 16:02:23.567071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.567134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.931 [2024-07-20 16:02:23.567146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:26:48.931 [2024-07-20 16:02:23.567157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.931 [2024-07-20 16:02:23.567167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.567204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.931 [2024-07-20 16:02:23.567216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:48.931 [2024-07-20 16:02:23.567226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.931 [2024-07-20 16:02:23.567242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.567281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.931 [2024-07-20 16:02:23.567292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:48.931 [2024-07-20 16:02:23.567302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.931 [2024-07-20 16:02:23.567312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.567354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.931 [2024-07-20 16:02:23.567382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:48.931 [2024-07-20 16:02:23.567392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.931 [2024-07-20 16:02:23.567402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.931 [2024-07-20 16:02:23.567546] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 36.705 ms, result 0 00:26:49.189 00:26:49.189 00:26:49.189 16:02:23 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:51.167 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:51.167 16:02:25 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:26:51.167 [2024-07-20 16:02:25.546873] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:26:51.167 [2024-07-20 16:02:25.546984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95446 ] 00:26:51.167 [2024-07-20 16:02:25.698416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.167 [2024-07-20 16:02:25.738844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.167 [2024-07-20 16:02:25.839715] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:51.167 [2024-07-20 16:02:25.839789] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:51.427 [2024-07-20 16:02:25.989927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.427 [2024-07-20 16:02:25.989978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:51.427 [2024-07-20 16:02:25.989993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:51.427 [2024-07-20 16:02:25.990003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.427 [2024-07-20 16:02:25.990048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.427 [2024-07-20 16:02:25.990061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:51.427 [2024-07-20 16:02:25.990080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:51.427 [2024-07-20 16:02:25.990093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.427 [2024-07-20 16:02:25.990122] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:51.427 [2024-07-20 16:02:25.990326] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:51.427 [2024-07-20 16:02:25.990345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.427 [2024-07-20 16:02:25.990375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:51.427 [2024-07-20 16:02:25.990386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:26:51.427 [2024-07-20 16:02:25.990395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.427 [2024-07-20 16:02:25.990698] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:26:51.427 [2024-07-20 16:02:25.990718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.427 [2024-07-20 16:02:25.990741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:51.427 [2024-07-20 16:02:25.990752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:51.427 [2024-07-20 16:02:25.990771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.427 [2024-07-20 16:02:25.990816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.427 [2024-07-20 16:02:25.990832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:51.427 [2024-07-20 16:02:25.990849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:51.427 [2024-07-20 16:02:25.990859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.427 [2024-07-20 16:02:25.991211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.427 [2024-07-20 
16:02:25.991230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:51.427 [2024-07-20 16:02:25.991240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:26:51.427 [2024-07-20 16:02:25.991260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.427 [2024-07-20 16:02:25.991342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.427 [2024-07-20 16:02:25.991366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:51.427 [2024-07-20 16:02:25.991377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:51.427 [2024-07-20 16:02:25.991386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.427 [2024-07-20 16:02:25.991412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.427 [2024-07-20 16:02:25.991423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:51.427 [2024-07-20 16:02:25.991433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:51.427 [2024-07-20 16:02:25.991447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.427 [2024-07-20 16:02:25.991471] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:51.427 [2024-07-20 16:02:25.993208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.427 [2024-07-20 16:02:25.993228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:51.427 [2024-07-20 16:02:25.993238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.745 ms 00:26:51.427 [2024-07-20 16:02:25.993256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.428 [2024-07-20 16:02:25.993286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.428 [2024-07-20 16:02:25.993299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:51.428 [2024-07-20 16:02:25.993309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:51.428 [2024-07-20 16:02:25.993319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.428 [2024-07-20 16:02:25.993342] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:51.428 [2024-07-20 16:02:25.993374] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:51.428 [2024-07-20 16:02:25.993419] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:51.428 [2024-07-20 16:02:25.993437] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:51.428 [2024-07-20 16:02:25.993517] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:51.428 [2024-07-20 16:02:25.993547] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:51.428 [2024-07-20 16:02:25.993563] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:51.428 [2024-07-20 16:02:25.993579] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:51.428 [2024-07-20 16:02:25.993591] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:51.428 [2024-07-20 16:02:25.993602] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:51.428 [2024-07-20 16:02:25.993611] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:51.428 [2024-07-20 16:02:25.993621] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:51.428 [2024-07-20 16:02:25.993630] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:51.428 [2024-07-20 16:02:25.993643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.428 [2024-07-20 16:02:25.993653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:51.428 [2024-07-20 16:02:25.993664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:26:51.428 [2024-07-20 16:02:25.993680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.428 [2024-07-20 16:02:25.993749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.428 [2024-07-20 16:02:25.993762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:51.428 [2024-07-20 16:02:25.993772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:51.428 [2024-07-20 16:02:25.993781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.428 [2024-07-20 16:02:25.993875] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:51.428 [2024-07-20 16:02:25.993888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:51.428 [2024-07-20 16:02:25.993899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:51.428 [2024-07-20 16:02:25.993915] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.428 [2024-07-20 16:02:25.993925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:51.428 [2024-07-20 16:02:25.993935] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:51.428 [2024-07-20 16:02:25.993944] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:51.428 [2024-07-20 16:02:25.993954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:51.428 [2024-07-20 16:02:25.993966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:51.428 [2024-07-20 16:02:25.993975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:51.428 [2024-07-20 16:02:25.993985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:51.428 [2024-07-20 16:02:25.993994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:51.428 [2024-07-20 16:02:25.994003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:51.428 [2024-07-20 16:02:25.994012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:51.428 [2024-07-20 16:02:25.994021] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:51.428 [2024-07-20 16:02:25.994030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.428 [2024-07-20 16:02:25.994039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:51.428 [2024-07-20 16:02:25.994048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:51.428 [2024-07-20 16:02:25.994057] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:26:51.428 [2024-07-20 16:02:25.994066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:51.428 [2024-07-20 16:02:25.994076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:51.428 [2024-07-20 16:02:25.994085] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.428 [2024-07-20 16:02:25.994094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:51.428 [2024-07-20 16:02:25.994103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:51.428 [2024-07-20 16:02:25.994125] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.428 [2024-07-20 16:02:25.994134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:51.428 [2024-07-20 16:02:25.994143] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:51.428 [2024-07-20 16:02:25.994151] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.428 [2024-07-20 16:02:25.994160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:51.428 [2024-07-20 16:02:25.994169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:51.428 [2024-07-20 16:02:25.994178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.428 [2024-07-20 16:02:25.994186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:51.428 [2024-07-20 16:02:25.994196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:51.428 [2024-07-20 16:02:25.994205] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:51.428 [2024-07-20 16:02:25.994214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:51.428 [2024-07-20 16:02:25.994224] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:51.428 [2024-07-20 16:02:25.994233] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:51.428 [2024-07-20 16:02:25.994241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:51.428 [2024-07-20 16:02:25.994250] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:51.428 [2024-07-20 16:02:25.994259] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.428 [2024-07-20 16:02:25.994273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:51.428 [2024-07-20 16:02:25.994282] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:51.428 [2024-07-20 16:02:25.994291] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.428 [2024-07-20 16:02:25.994300] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:51.428 [2024-07-20 16:02:25.994309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:51.428 [2024-07-20 16:02:25.994322] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:51.428 [2024-07-20 16:02:25.994338] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.428 [2024-07-20 16:02:25.994347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:51.428 [2024-07-20 16:02:25.994367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:51.428 [2024-07-20 16:02:25.994377] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:51.428 [2024-07-20 16:02:25.994386] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:51.428 [2024-07-20 16:02:25.994395] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:51.428 [2024-07-20 16:02:25.994404] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:51.428 [2024-07-20 16:02:25.994415] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:51.428 [2024-07-20 16:02:25.994426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:51.428 [2024-07-20 16:02:25.994437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:51.428 [2024-07-20 16:02:25.994451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:51.428 [2024-07-20 16:02:25.994461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:51.428 [2024-07-20 16:02:25.994487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:51.428 [2024-07-20 16:02:25.994498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:51.428 [2024-07-20 16:02:25.994508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:51.428 [2024-07-20 16:02:25.994518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:51.428 [2024-07-20 16:02:25.994528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:51.428 [2024-07-20 16:02:25.994538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:51.428 [2024-07-20 16:02:25.994550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:51.428 [2024-07-20 16:02:25.994560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:51.428 [2024-07-20 16:02:25.994570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:51.428 [2024-07-20 16:02:25.994580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:51.428 [2024-07-20 16:02:25.994590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:51.428 [2024-07-20 16:02:25.994601] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:51.428 [2024-07-20 16:02:25.994611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:51.428 [2024-07-20 16:02:25.994623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:26:51.428 [2024-07-20 16:02:25.994636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:51.428 [2024-07-20 16:02:25.994647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:51.428 [2024-07-20 16:02:25.994658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:51.428 [2024-07-20 16:02:25.994676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.428 [2024-07-20 16:02:25.994687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:51.428 [2024-07-20 16:02:25.994697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:26:51.428 [2024-07-20 16:02:25.994706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.015455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.015498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:51.429 [2024-07-20 16:02:26.015515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.723 ms 00:26:51.429 [2024-07-20 16:02:26.015529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.015624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.015638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:51.429 [2024-07-20 16:02:26.015652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:51.429 [2024-07-20 16:02:26.015665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.028601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.028652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:51.429 [2024-07-20 16:02:26.028675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.892 ms 00:26:51.429 [2024-07-20 16:02:26.028692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.028756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.028776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:51.429 [2024-07-20 16:02:26.028800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:51.429 [2024-07-20 16:02:26.028817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.028986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.029000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:51.429 [2024-07-20 16:02:26.029013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:26:51.429 [2024-07-20 16:02:26.029024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.029150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.029165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:51.429 [2024-07-20 16:02:26.029177] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:26:51.429 [2024-07-20 16:02:26.029189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.035496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.035536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:51.429 [2024-07-20 16:02:26.035558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.294 ms 00:26:51.429 [2024-07-20 16:02:26.035570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.035699] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:51.429 [2024-07-20 16:02:26.035717] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:51.429 [2024-07-20 16:02:26.035731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.035751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:51.429 [2024-07-20 16:02:26.035768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:51.429 [2024-07-20 16:02:26.035778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.046637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.046667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:51.429 [2024-07-20 16:02:26.046679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.857 ms 00:26:51.429 [2024-07-20 16:02:26.046698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.046802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.046813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:51.429 [2024-07-20 16:02:26.046824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:26:51.429 [2024-07-20 16:02:26.046833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.046884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.046899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:51.429 [2024-07-20 16:02:26.046909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:51.429 [2024-07-20 16:02:26.046919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.047169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.047197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:51.429 [2024-07-20 16:02:26.047208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:26:51.429 [2024-07-20 16:02:26.047218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.047235] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:26:51.429 [2024-07-20 16:02:26.047247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.047261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:26:51.429 [2024-07-20 16:02:26.047278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:51.429 [2024-07-20 16:02:26.047294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.054506] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:51.429 [2024-07-20 16:02:26.054694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.054708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:51.429 [2024-07-20 16:02:26.054726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.392 ms 00:26:51.429 [2024-07-20 16:02:26.054746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.056770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.056798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:51.429 [2024-07-20 16:02:26.056809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:26:51.429 [2024-07-20 16:02:26.056819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.056891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.056903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:51.429 [2024-07-20 16:02:26.056913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:51.429 [2024-07-20 16:02:26.056926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.056965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.056976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:51.429 [2024-07-20 16:02:26.056986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:51.429 [2024-07-20 16:02:26.056995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.057035] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:51.429 [2024-07-20 16:02:26.057046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.057056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:51.429 [2024-07-20 16:02:26.057065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:51.429 [2024-07-20 16:02:26.057074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.061093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.061127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:51.429 [2024-07-20 16:02:26.061139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.003 ms 00:26:51.429 [2024-07-20 16:02:26.061149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.061218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.429 [2024-07-20 16:02:26.061231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:51.429 [2024-07-20 16:02:26.061242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 
00:26:51.429 [2024-07-20 16:02:26.061251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.429 [2024-07-20 16:02:26.062472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 72.184 ms, result 0 00:27:31.510  Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-20 16:03:06.216680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.510 [2024-07-20 16:03:06.216865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:31.510 [2024-07-20 16:03:06.216950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:31.510 [2024-07-20 16:03:06.216986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.510 [2024-07-20 16:03:06.219369] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:31.510 [2024-07-20 16:03:06.221263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.510 [2024-07-20 16:03:06.221405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:31.510 [2024-07-20 16:03:06.221492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.725 ms 00:27:31.510 [2024-07-20 16:03:06.221528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.510 [2024-07-20 16:03:06.230449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.510 [2024-07-20 16:03:06.230598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:31.510 [2024-07-20 16:03:06.230676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.145 ms 00:27:31.510 [2024-07-20 16:03:06.230693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.510 [2024-07-20 16:03:06.230730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.510 [2024-07-20 16:03:06.230741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:31.510 [2024-07-20 16:03:06.230752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:31.510 [2024-07-20 16:03:06.230762] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.510 [2024-07-20 16:03:06.230811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.510 [2024-07-20 16:03:06.230826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:31.510 [2024-07-20 16:03:06.230835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:31.510 [2024-07-20 16:03:06.230859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.510 [2024-07-20 16:03:06.230882] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:31.510 [2024-07-20 16:03:06.230895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:27:31.510 ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free 00:27:31.511 [2024-07-20 16:03:06.232112] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:31.511 [2024-07-20 16:03:06.232131] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68669144-5c5a-4b7c-bb83-844974b2e831 00:27:31.511 [2024-07-20 16:03:06.232142] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:27:31.511 [2024-07-20 16:03:06.232152] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129568 00:27:31.511 [2024-07-20 16:03:06.232161] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:27:31.511 [2024-07-20 16:03:06.232170] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:27:31.511 [2024-07-20 16:03:06.232183] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:31.511 [2024-07-20 16:03:06.232193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:31.511 [2024-07-20 16:03:06.232203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:31.511 [2024-07-20 16:03:06.232212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:31.511 [2024-07-20 16:03:06.232220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:31.511 [2024-07-20 16:03:06.232231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.511 [2024-07-20 16:03:06.232248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:31.511 [2024-07-20 16:03:06.232258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.352 ms 00:27:31.511 [2024-07-20 16:03:06.232267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.511 [2024-07-20 16:03:06.234065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.511 [2024-07-20 16:03:06.234094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:31.511 [2024-07-20 16:03:06.234116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.783 ms 00:27:31.511 [2024-07-20 16:03:06.234126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.511 [2024-07-20 16:03:06.234232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.511 [2024-07-20 16:03:06.234244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:31.511 [2024-07-20 16:03:06.234256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:27:31.511 [2024-07-20 16:03:06.234266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.511 [2024-07-20 16:03:06.240243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.511 [2024-07-20 16:03:06.240265] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:31.511 [2024-07-20 16:03:06.240275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.511 [2024-07-20 16:03:06.240292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.511 [2024-07-20 16:03:06.240341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.511 [2024-07-20 16:03:06.240351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:31.511 [2024-07-20 16:03:06.240360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.511 [2024-07-20 16:03:06.240397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.511 [2024-07-20 16:03:06.240442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.511 [2024-07-20 16:03:06.240458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:31.511 [2024-07-20 16:03:06.240468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.511 [2024-07-20 16:03:06.240477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.511 [2024-07-20 16:03:06.240518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.511 [2024-07-20 16:03:06.240528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:31.511 [2024-07-20 16:03:06.240538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.511 [2024-07-20 16:03:06.240563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.511 [2024-07-20 16:03:06.251556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.511 [2024-07-20 16:03:06.251597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:31.511 [2024-07-20 16:03:06.251610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.511 [2024-07-20 16:03:06.251619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.511 [2024-07-20 16:03:06.260742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.511 [2024-07-20 16:03:06.260774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:31.511 [2024-07-20 16:03:06.260785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.511 [2024-07-20 16:03:06.260811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.511 [2024-07-20 16:03:06.260860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.511 [2024-07-20 16:03:06.260872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:31.511 [2024-07-20 16:03:06.260888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.511 [2024-07-20 16:03:06.260898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.512 [2024-07-20 16:03:06.260922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.512 [2024-07-20 16:03:06.260933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:31.512 [2024-07-20 16:03:06.260943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.512 [2024-07-20 16:03:06.260960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.512 [2024-07-20 16:03:06.261019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:27:31.512 [2024-07-20 16:03:06.261031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:31.512 [2024-07-20 16:03:06.261041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.512 [2024-07-20 16:03:06.261054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.512 [2024-07-20 16:03:06.261082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.512 [2024-07-20 16:03:06.261100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:31.512 [2024-07-20 16:03:06.261110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.512 [2024-07-20 16:03:06.261120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.512 [2024-07-20 16:03:06.261157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.512 [2024-07-20 16:03:06.261168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:31.512 [2024-07-20 16:03:06.261177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.512 [2024-07-20 16:03:06.261187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.512 [2024-07-20 16:03:06.261230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:31.512 [2024-07-20 16:03:06.261242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:31.512 [2024-07-20 16:03:06.261251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:31.512 [2024-07-20 16:03:06.261267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.512 [2024-07-20 16:03:06.261624] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 46.606 ms, result 0 00:27:32.534 00:27:32.534 00:27:32.534 16:03:07 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:27:32.534 [2024-07-20 16:03:07.286450] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:27:32.534 [2024-07-20 16:03:07.286743] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95866 ] 00:27:32.795 [2024-07-20 16:03:07.437111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.795 [2024-07-20 16:03:07.477849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.795 [2024-07-20 16:03:07.578854] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:32.795 [2024-07-20 16:03:07.578938] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:33.056 [2024-07-20 16:03:07.729821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.056 [2024-07-20 16:03:07.729867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:33.056 [2024-07-20 16:03:07.729882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:33.056 [2024-07-20 16:03:07.729908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.056 [2024-07-20 16:03:07.729955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.056 [2024-07-20 16:03:07.729967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:33.056 [2024-07-20 16:03:07.729977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:33.056 [2024-07-20 16:03:07.729989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.056 [2024-07-20 16:03:07.730008] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:33.056 [2024-07-20 16:03:07.730220] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:33.056 [2024-07-20 16:03:07.730238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.056 [2024-07-20 16:03:07.730252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:33.056 [2024-07-20 16:03:07.730271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:27:33.056 [2024-07-20 16:03:07.730282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.056 [2024-07-20 16:03:07.730606] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:33.056 [2024-07-20 16:03:07.730629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.056 [2024-07-20 16:03:07.730653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:33.056 [2024-07-20 16:03:07.730672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:33.056 [2024-07-20 16:03:07.730685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.056 [2024-07-20 16:03:07.730759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.056 [2024-07-20 16:03:07.730771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:33.056 [2024-07-20 16:03:07.730788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:33.056 [2024-07-20 16:03:07.730798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.056 [2024-07-20 16:03:07.731161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.056 [2024-07-20 
16:03:07.731179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:33.056 [2024-07-20 16:03:07.731189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:27:33.056 [2024-07-20 16:03:07.731202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.056 [2024-07-20 16:03:07.731285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.056 [2024-07-20 16:03:07.731298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:33.056 [2024-07-20 16:03:07.731308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:33.056 [2024-07-20 16:03:07.731320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.056 [2024-07-20 16:03:07.731353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.056 [2024-07-20 16:03:07.731377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:33.056 [2024-07-20 16:03:07.731387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:33.056 [2024-07-20 16:03:07.731399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.056 [2024-07-20 16:03:07.731422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:33.056 [2024-07-20 16:03:07.733173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.056 [2024-07-20 16:03:07.733188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:33.056 [2024-07-20 16:03:07.733207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.757 ms 00:27:33.056 [2024-07-20 16:03:07.733216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.056 [2024-07-20 16:03:07.733252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.056 [2024-07-20 16:03:07.733262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:33.056 [2024-07-20 16:03:07.733271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:33.056 [2024-07-20 16:03:07.733280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.056 [2024-07-20 16:03:07.733315] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:33.056 [2024-07-20 16:03:07.733336] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:33.056 [2024-07-20 16:03:07.733388] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:33.056 [2024-07-20 16:03:07.733406] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:33.056 [2024-07-20 16:03:07.733486] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:33.056 [2024-07-20 16:03:07.733498] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:33.056 [2024-07-20 16:03:07.733532] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:33.056 [2024-07-20 16:03:07.733548] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:33.056 [2024-07-20 16:03:07.733559] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:33.056 [2024-07-20 16:03:07.733570] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:33.056 [2024-07-20 16:03:07.733579] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:33.056 [2024-07-20 16:03:07.733589] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:33.056 [2024-07-20 16:03:07.733597] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:33.056 [2024-07-20 16:03:07.733613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.056 [2024-07-20 16:03:07.733622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:33.056 [2024-07-20 16:03:07.733632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:27:33.056 [2024-07-20 16:03:07.733646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.056 [2024-07-20 16:03:07.733710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.056 [2024-07-20 16:03:07.733723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:33.056 [2024-07-20 16:03:07.733732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:33.056 [2024-07-20 16:03:07.733741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.056 [2024-07-20 16:03:07.733848] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:33.056 [2024-07-20 16:03:07.733860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:33.056 [2024-07-20 16:03:07.733870] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:33.056 [2024-07-20 16:03:07.733881] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.056 [2024-07-20 16:03:07.733890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:33.056 [2024-07-20 16:03:07.733899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:33.056 [2024-07-20 16:03:07.733908] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:33.056 [2024-07-20 16:03:07.733924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:33.056 [2024-07-20 16:03:07.733934] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:33.056 [2024-07-20 16:03:07.733942] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:33.056 [2024-07-20 16:03:07.733951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:33.056 [2024-07-20 16:03:07.733963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:33.056 [2024-07-20 16:03:07.733972] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:33.056 [2024-07-20 16:03:07.733982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:33.056 [2024-07-20 16:03:07.733991] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:33.056 [2024-07-20 16:03:07.734000] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.056 [2024-07-20 16:03:07.734009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:33.056 [2024-07-20 16:03:07.734019] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:33.056 [2024-07-20 16:03:07.734027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:27:33.056 [2024-07-20 16:03:07.734036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:33.057 [2024-07-20 16:03:07.734045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:33.057 [2024-07-20 16:03:07.734054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.057 [2024-07-20 16:03:07.734062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:33.057 [2024-07-20 16:03:07.734072] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:33.057 [2024-07-20 16:03:07.734080] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.057 [2024-07-20 16:03:07.734089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:33.057 [2024-07-20 16:03:07.734107] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:33.057 [2024-07-20 16:03:07.734119] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.057 [2024-07-20 16:03:07.734128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:33.057 [2024-07-20 16:03:07.734137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:33.057 [2024-07-20 16:03:07.734146] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.057 [2024-07-20 16:03:07.734155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:33.057 [2024-07-20 16:03:07.734164] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:33.057 [2024-07-20 16:03:07.734172] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:33.057 [2024-07-20 16:03:07.734181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:33.057 [2024-07-20 16:03:07.734190] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:33.057 [2024-07-20 16:03:07.734198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:33.057 [2024-07-20 16:03:07.734207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:33.057 [2024-07-20 16:03:07.734216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:33.057 [2024-07-20 16:03:07.734225] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.057 [2024-07-20 16:03:07.734234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:33.057 [2024-07-20 16:03:07.734244] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:33.057 [2024-07-20 16:03:07.734253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.057 [2024-07-20 16:03:07.734263] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:33.057 [2024-07-20 16:03:07.734273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:33.057 [2024-07-20 16:03:07.734293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:33.057 [2024-07-20 16:03:07.734302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.057 [2024-07-20 16:03:07.734312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:33.057 [2024-07-20 16:03:07.734321] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:33.057 [2024-07-20 16:03:07.734330] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:33.057 [2024-07-20 16:03:07.734339] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:33.057 [2024-07-20 16:03:07.734348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:33.057 [2024-07-20 16:03:07.734371] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:33.057 [2024-07-20 16:03:07.734381] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:33.057 [2024-07-20 16:03:07.734393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:33.057 [2024-07-20 16:03:07.734411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:33.057 [2024-07-20 16:03:07.734422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:33.057 [2024-07-20 16:03:07.734432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:33.057 [2024-07-20 16:03:07.734443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:33.057 [2024-07-20 16:03:07.734456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:33.057 [2024-07-20 16:03:07.734466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:33.057 [2024-07-20 16:03:07.734476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:33.057 [2024-07-20 16:03:07.734487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:33.057 [2024-07-20 16:03:07.734497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:33.057 [2024-07-20 16:03:07.734507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:33.057 [2024-07-20 16:03:07.734516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:33.057 [2024-07-20 16:03:07.734526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:33.057 [2024-07-20 16:03:07.734536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:33.057 [2024-07-20 16:03:07.734546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:33.057 [2024-07-20 16:03:07.734556] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:33.057 [2024-07-20 16:03:07.734566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:33.057 [2024-07-20 16:03:07.734584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:27:33.057 [2024-07-20 16:03:07.734595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:33.057 [2024-07-20 16:03:07.734605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:33.057 [2024-07-20 16:03:07.734615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:33.057 [2024-07-20 16:03:07.734636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.057 [2024-07-20 16:03:07.734653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:33.057 [2024-07-20 16:03:07.734662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:27:33.057 [2024-07-20 16:03:07.734672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.057 [2024-07-20 16:03:07.751322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.057 [2024-07-20 16:03:07.751375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:33.057 [2024-07-20 16:03:07.751394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.626 ms 00:27:33.057 [2024-07-20 16:03:07.751405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.057 [2024-07-20 16:03:07.751482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.057 [2024-07-20 16:03:07.751493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:33.057 [2024-07-20 16:03:07.751503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:33.057 [2024-07-20 16:03:07.751513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.057 [2024-07-20 16:03:07.762144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.057 [2024-07-20 16:03:07.762182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:33.057 [2024-07-20 16:03:07.762197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.594 ms 00:27:33.057 [2024-07-20 16:03:07.762208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.057 [2024-07-20 16:03:07.762245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.057 [2024-07-20 16:03:07.762257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:33.057 [2024-07-20 16:03:07.762277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:33.057 [2024-07-20 16:03:07.762289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.057 [2024-07-20 16:03:07.762413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.057 [2024-07-20 16:03:07.762428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:33.057 [2024-07-20 16:03:07.762439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:27:33.057 [2024-07-20 16:03:07.762450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.057 [2024-07-20 16:03:07.762570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.057 [2024-07-20 16:03:07.762589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:33.057 [2024-07-20 16:03:07.762600] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:27:33.057 [2024-07-20 16:03:07.762611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.057 [2024-07-20 16:03:07.768633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.057 [2024-07-20 16:03:07.768665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:33.057 [2024-07-20 16:03:07.768683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.009 ms 00:27:33.057 [2024-07-20 16:03:07.768693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.057 [2024-07-20 16:03:07.768808] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:33.057 [2024-07-20 16:03:07.768822] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:33.057 [2024-07-20 16:03:07.768833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.057 [2024-07-20 16:03:07.768852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:33.057 [2024-07-20 16:03:07.768870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:33.057 [2024-07-20 16:03:07.768878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.057 [2024-07-20 16:03:07.778703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.057 [2024-07-20 16:03:07.778730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:33.058 [2024-07-20 16:03:07.778741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.824 ms 00:27:33.058 [2024-07-20 16:03:07.778750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.058 [2024-07-20 16:03:07.778844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.058 [2024-07-20 16:03:07.778854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:33.058 [2024-07-20 16:03:07.778864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:33.058 [2024-07-20 16:03:07.778883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.058 [2024-07-20 16:03:07.778927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.058 [2024-07-20 16:03:07.778941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:33.058 [2024-07-20 16:03:07.778954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:27:33.058 [2024-07-20 16:03:07.778962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.058 [2024-07-20 16:03:07.779252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.058 [2024-07-20 16:03:07.779264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:33.058 [2024-07-20 16:03:07.779274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:27:33.058 [2024-07-20 16:03:07.779283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.058 [2024-07-20 16:03:07.779299] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:33.058 [2024-07-20 16:03:07.779310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.058 [2024-07-20 16:03:07.779330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:27:33.058 [2024-07-20 16:03:07.779346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:33.058 [2024-07-20 16:03:07.779375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.058 [2024-07-20 16:03:07.786463] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:33.058 [2024-07-20 16:03:07.786666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.058 [2024-07-20 16:03:07.786683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:33.058 [2024-07-20 16:03:07.786695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.267 ms 00:27:33.058 [2024-07-20 16:03:07.786707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.058 [2024-07-20 16:03:07.788922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.058 [2024-07-20 16:03:07.788950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:33.058 [2024-07-20 16:03:07.788961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.191 ms 00:27:33.058 [2024-07-20 16:03:07.788971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.058 [2024-07-20 16:03:07.789023] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:27:33.058 [2024-07-20 16:03:07.789602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.058 [2024-07-20 16:03:07.789617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:33.058 [2024-07-20 16:03:07.789631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:27:33.058 [2024-07-20 16:03:07.789641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.058 [2024-07-20 16:03:07.789681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.058 [2024-07-20 16:03:07.789699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:33.058 [2024-07-20 16:03:07.789710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:33.058 [2024-07-20 16:03:07.789719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.058 [2024-07-20 16:03:07.789752] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:33.058 [2024-07-20 16:03:07.789764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.058 [2024-07-20 16:03:07.789773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:33.058 [2024-07-20 16:03:07.789790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:33.058 [2024-07-20 16:03:07.789802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.058 [2024-07-20 16:03:07.793832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.058 [2024-07-20 16:03:07.793873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:33.058 [2024-07-20 16:03:07.793893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.018 ms 00:27:33.058 [2024-07-20 16:03:07.793903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.058 [2024-07-20 16:03:07.793965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.058 [2024-07-20 16:03:07.793977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finalize initialization 00:27:33.058 [2024-07-20 16:03:07.793988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:33.058 [2024-07-20 16:03:07.793997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.058 [2024-07-20 16:03:07.799849] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 68.604 ms, result 0 00:28:10.887 Copying: 28/1024 [MB] (28 MBps) ... Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-20 16:03:45.663407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.887 [2024-07-20 16:03:45.663467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:10.887 [2024-07-20 16:03:45.663483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:10.887 [2024-07-20 16:03:45.663494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.887 [2024-07-20 16:03:45.663518] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:10.887 [2024-07-20 16:03:45.664205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.887 [2024-07-20 16:03:45.664222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:10.887 [2024-07-20 16:03:45.664233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:28:10.887 [2024-07-20 16:03:45.664247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.887 [2024-07-20 16:03:45.664454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.887 [2024-07-20 16:03:45.664467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:10.887 [2024-07-20 16:03:45.664479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:28:10.887 [2024-07-20 16:03:45.664488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.887 [2024-07-20 16:03:45.664519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.887 [2024-07-20 16:03:45.664540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:10.887 [2024-07-20 16:03:45.664551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 
ms 00:28:10.887 [2024-07-20 16:03:45.664560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.887 [2024-07-20 16:03:45.664614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.887 [2024-07-20 16:03:45.664625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:10.887 [2024-07-20 16:03:45.664635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:10.887 [2024-07-20 16:03:45.664644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.887 [2024-07-20 16:03:45.664660] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:10.887 [2024-07-20 16:03:45.664674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:28:10.887 ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-94: 0 / 261120 wr_cnt: 0 state: free 00:28:10.888 [2024-07-20 16:03:45.665868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:28:10.888 [2024-07-20 16:03:45.665879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:28:10.888 [2024-07-20 16:03:45.665889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:28:10.888 [2024-07-20 16:03:45.665899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:28:10.888 [2024-07-20 16:03:45.665909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:28:10.888 [2024-07-20 16:03:45.665920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:28:10.888 [2024-07-20 16:03:45.665937] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:10.888 [2024-07-20 16:03:45.665958] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68669144-5c5a-4b7c-bb83-844974b2e831
00:28:10.888 [2024-07-20 16:03:45.665970] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888
00:28:10.888 [2024-07-20 16:03:45.665980] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4384
00:28:10.888 [2024-07-20 16:03:45.665999] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4352
00:28:10.888 [2024-07-20 16:03:45.666191] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074
00:28:10.888 [2024-07-20 16:03:45.666201] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:10.888 [2024-07-20 16:03:45.666211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:28:10.888 [2024-07-20 16:03:45.666220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:28:10.888 [2024-07-20 16:03:45.666229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:28:10.888 [2024-07-20 16:03:45.666238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:28:10.888 [2024-07-20 16:03:45.666248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:10.888 [2024-07-20 16:03:45.666259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:28:10.888 [2024-07-20 16:03:45.666268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.591 ms
00:28:10.888 [2024-07-20 16:03:45.666278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:10.888 [2024-07-20 16:03:45.668223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:10.888 [2024-07-20 16:03:45.668252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:28:10.888 [2024-07-20 16:03:45.668263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.927 ms
00:28:10.888 [2024-07-20 16:03:45.668273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:10.888 [2024-07-20 16:03:45.668402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:10.888 [2024-07-20 16:03:45.668415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:28:10.888 [2024-07-20 16:03:45.668426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms
00:28:10.888 [2024-07-20 16:03:45.668436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:10.888 [2024-07-20 16:03:45.675224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:10.888 [2024-07-20 16:03:45.675255]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:10.888 [2024-07-20 16:03:45.675268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.888 [2024-07-20 16:03:45.675278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.888 [2024-07-20 16:03:45.675334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.888 [2024-07-20 16:03:45.675344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:10.888 [2024-07-20 16:03:45.675365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.888 [2024-07-20 16:03:45.675376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.888 [2024-07-20 16:03:45.675437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.889 [2024-07-20 16:03:45.675456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:10.889 [2024-07-20 16:03:45.675471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.889 [2024-07-20 16:03:45.675481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.889 [2024-07-20 16:03:45.675498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.889 [2024-07-20 16:03:45.675509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:10.889 [2024-07-20 16:03:45.675519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.889 [2024-07-20 16:03:45.675528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.147 [2024-07-20 16:03:45.690738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.147 [2024-07-20 16:03:45.690783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:11.147 [2024-07-20 16:03:45.690796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.147 [2024-07-20 16:03:45.690807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.147 [2024-07-20 16:03:45.700412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.147 [2024-07-20 16:03:45.700450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:11.147 [2024-07-20 16:03:45.700463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.147 [2024-07-20 16:03:45.700473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.147 [2024-07-20 16:03:45.700531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.147 [2024-07-20 16:03:45.700542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:11.147 [2024-07-20 16:03:45.700556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.147 [2024-07-20 16:03:45.700566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.147 [2024-07-20 16:03:45.700590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.147 [2024-07-20 16:03:45.700600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:11.147 [2024-07-20 16:03:45.700611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.147 [2024-07-20 16:03:45.700620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.147 [2024-07-20 16:03:45.700676] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:28:11.147 [2024-07-20 16:03:45.700689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:11.147 [2024-07-20 16:03:45.700698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.147 [2024-07-20 16:03:45.700718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.147 [2024-07-20 16:03:45.700744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.147 [2024-07-20 16:03:45.700763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:11.147 [2024-07-20 16:03:45.700773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.147 [2024-07-20 16:03:45.700783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.147 [2024-07-20 16:03:45.700819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.147 [2024-07-20 16:03:45.700841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:11.147 [2024-07-20 16:03:45.700853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.147 [2024-07-20 16:03:45.700866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.148 [2024-07-20 16:03:45.700912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.148 [2024-07-20 16:03:45.700924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:11.148 [2024-07-20 16:03:45.700934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.148 [2024-07-20 16:03:45.700943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.148 [2024-07-20 16:03:45.701077] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 37.697 ms, result 0 00:28:11.148 00:28:11.148 00:28:11.426 16:03:45 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:12.798 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:12.798 16:03:47 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:12.798 16:03:47 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:28:12.798 16:03:47 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:13.056 Process with pid 94426 is not found 00:28:13.056 Remove shared memory files 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94426 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 94426 ']' 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 94426 00:28:13.056 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (94426) - No such process 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # echo 'Process with pid 94426 is not found' 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f 
rm -f 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_68669144-5c5a-4b7c-bb83-844974b2e831_band_md /dev/hugepages/ftl_68669144-5c5a-4b7c-bb83-844974b2e831_l2p_l1 /dev/hugepages/ftl_68669144-5c5a-4b7c-bb83-844974b2e831_l2p_l2 /dev/hugepages/ftl_68669144-5c5a-4b7c-bb83-844974b2e831_l2p_l2_ctx /dev/hugepages/ftl_68669144-5c5a-4b7c-bb83-844974b2e831_nvc_md /dev/hugepages/ftl_68669144-5c5a-4b7c-bb83-844974b2e831_p2l_pool /dev/hugepages/ftl_68669144-5c5a-4b7c-bb83-844974b2e831_sb /dev/hugepages/ftl_68669144-5c5a-4b7c-bb83-844974b2e831_sb_shm /dev/hugepages/ftl_68669144-5c5a-4b7c-bb83-844974b2e831_trim_bitmap /dev/hugepages/ftl_68669144-5c5a-4b7c-bb83-844974b2e831_trim_log /dev/hugepages/ftl_68669144-5c5a-4b7c-bb83-844974b2e831_trim_md /dev/hugepages/ftl_68669144-5c5a-4b7c-bb83-844974b2e831_vmap 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:28:13.056 00:28:13.056 real 2m59.194s 00:28:13.056 user 2m48.428s 00:28:13.056 sys 0m11.851s 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:13.056 16:03:47 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:13.056 ************************************ 00:28:13.056 END TEST ftl_restore_fast 00:28:13.056 ************************************ 00:28:13.056 16:03:47 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:13.056 16:03:47 ftl -- ftl/ftl.sh@14 -- # killprocess 87506 00:28:13.056 16:03:47 ftl -- common/autotest_common.sh@946 -- # '[' -z 87506 ']' 00:28:13.056 Process with pid 87506 is not found 00:28:13.056 16:03:47 ftl -- common/autotest_common.sh@950 -- # kill -0 87506 00:28:13.056 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (87506) - No such process 00:28:13.056 16:03:47 ftl -- common/autotest_common.sh@973 -- # echo 'Process with pid 87506 is not found' 00:28:13.056 16:03:47 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:13.056 16:03:47 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=96296 00:28:13.056 16:03:47 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:13.056 16:03:47 ftl -- ftl/ftl.sh@20 -- # waitforlisten 96296 00:28:13.056 16:03:47 ftl -- common/autotest_common.sh@827 -- # '[' -z 96296 ']' 00:28:13.056 16:03:47 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.056 16:03:47 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:13.056 16:03:47 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.056 16:03:47 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:13.056 16:03:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:13.314 [2024-07-20 16:03:47.905061] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
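The teardown and restart sequence above relies on two shell idioms from the test harness: kill -0 to probe whether a pid is still alive before killing it (hence the "No such process" / "Process with pid ... is not found" pair), and a waitforlisten-style poll that blocks until the freshly started spdk_tgt answers RPCs on /var/tmp/spdk.sock. A minimal sketch of those idioms follows, assuming only the binary and socket paths shown in the trace; it is an illustration, not the actual autotest_common.sh implementation.

```bash
#!/usr/bin/env bash
# Sketch of the start-and-wait / probe-then-kill pattern visible in the trace.
# Paths mirror the log; the loop bounds are illustrative assumptions.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/spdk_tgt" &
tgt_pid=$!

# Poll the RPC socket until the target answers (waitforlisten-style).
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" spdk_get_version &>/dev/null; then
        break
    fi
    sleep 0.5
done

# kill -0 sends no signal; it only checks that the pid exists.
if kill -0 "$tgt_pid" 2>/dev/null; then
    kill "$tgt_pid"
    wait "$tgt_pid" || true
else
    echo "Process with pid $tgt_pid is not found"
fi
```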
00:28:13.314 [2024-07-20 16:03:47.905398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96296 ] 00:28:13.314 [2024-07-20 16:03:48.056825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.314 [2024-07-20 16:03:48.102841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.249 16:03:48 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:14.249 16:03:48 ftl -- common/autotest_common.sh@860 -- # return 0 00:28:14.249 16:03:48 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:14.249 nvme0n1 00:28:14.249 16:03:49 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:14.249 16:03:49 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:14.249 16:03:49 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:14.507 16:03:49 ftl -- ftl/common.sh@28 -- # stores=e358f2f2-6fdf-4a86-9e1d-5485e896949b 00:28:14.507 16:03:49 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:14.507 16:03:49 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e358f2f2-6fdf-4a86-9e1d-5485e896949b 00:28:14.766 16:03:49 ftl -- ftl/ftl.sh@23 -- # killprocess 96296 00:28:14.766 16:03:49 ftl -- common/autotest_common.sh@946 -- # '[' -z 96296 ']' 00:28:14.766 16:03:49 ftl -- common/autotest_common.sh@950 -- # kill -0 96296 00:28:14.766 16:03:49 ftl -- common/autotest_common.sh@951 -- # uname 00:28:14.766 16:03:49 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:14.766 16:03:49 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96296 00:28:14.766 killing process with pid 96296 00:28:14.766 16:03:49 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:14.766 16:03:49 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:14.766 16:03:49 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96296' 00:28:14.766 16:03:49 ftl -- common/autotest_common.sh@965 -- # kill 96296 00:28:14.766 16:03:49 ftl -- common/autotest_common.sh@970 -- # wait 96296 00:28:15.333 16:03:49 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:15.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:15.592 Waiting for block devices as requested 00:28:15.592 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:15.850 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:15.850 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:15.850 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:21.160 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:21.160 Remove shared memory files 00:28:21.160 16:03:55 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:21.160 16:03:55 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:21.160 16:03:55 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:21.160 16:03:55 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:21.160 16:03:55 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:21.160 16:03:55 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:21.160 16:03:55 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:21.160 ************************************ 00:28:21.160 
END TEST ftl 00:28:21.160 ************************************ 00:28:21.160 00:28:21.160 real 12m48.698s 00:28:21.160 user 14m37.263s 00:28:21.160 sys 1m31.184s 00:28:21.160 16:03:55 ftl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:21.160 16:03:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:21.160 16:03:55 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:21.160 16:03:55 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:21.160 16:03:55 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:21.160 16:03:55 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:21.160 16:03:55 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:21.160 16:03:55 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:21.160 16:03:55 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:21.160 16:03:55 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:21.160 16:03:55 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:21.160 16:03:55 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:21.160 16:03:55 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:21.160 16:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:21.160 16:03:55 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:21.160 16:03:55 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:28:21.160 16:03:55 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:28:21.160 16:03:55 -- common/autotest_common.sh@10 -- # set +x 00:28:23.064 INFO: APP EXITING 00:28:23.064 INFO: killing all VMs 00:28:23.064 INFO: killing vhost app 00:28:23.064 INFO: EXIT DONE 00:28:23.632 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:23.891 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:23.891 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:23.891 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:23.891 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:24.459 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:25.029 Cleaning 00:28:25.029 Removing: /var/run/dpdk/spdk0/config 00:28:25.029 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:25.029 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:25.029 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:25.029 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:25.029 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:25.029 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:25.029 Removing: /var/run/dpdk/spdk0 00:28:25.029 Removing: /var/run/dpdk/spdk_pid73709 00:28:25.029 Removing: /var/run/dpdk/spdk_pid73865 00:28:25.029 Removing: /var/run/dpdk/spdk_pid74064 00:28:25.029 Removing: /var/run/dpdk/spdk_pid74146 00:28:25.029 Removing: /var/run/dpdk/spdk_pid74175 00:28:25.029 Removing: /var/run/dpdk/spdk_pid74286 00:28:25.029 Removing: /var/run/dpdk/spdk_pid74304 00:28:25.029 Removing: /var/run/dpdk/spdk_pid74457 00:28:25.029 Removing: /var/run/dpdk/spdk_pid74528 00:28:25.029 Removing: /var/run/dpdk/spdk_pid74600 00:28:25.029 Removing: /var/run/dpdk/spdk_pid74686 00:28:25.029 Removing: /var/run/dpdk/spdk_pid74761 00:28:25.029 Removing: /var/run/dpdk/spdk_pid74800 00:28:25.029 Removing: /var/run/dpdk/spdk_pid74837 00:28:25.029 Removing: /var/run/dpdk/spdk_pid74898 00:28:25.029 Removing: /var/run/dpdk/spdk_pid75011 00:28:25.029 Removing: /var/run/dpdk/spdk_pid75421 00:28:25.029 Removing: /var/run/dpdk/spdk_pid75474 00:28:25.029 Removing: /var/run/dpdk/spdk_pid75521 
00:28:25.029 Removing: /var/run/dpdk/spdk_pid75537 00:28:25.029 Removing: /var/run/dpdk/spdk_pid75600 00:28:25.029 Removing: /var/run/dpdk/spdk_pid75616 00:28:25.029 Removing: /var/run/dpdk/spdk_pid75685 00:28:25.029 Removing: /var/run/dpdk/spdk_pid75696 00:28:25.029 Removing: /var/run/dpdk/spdk_pid75749 00:28:25.029 Removing: /var/run/dpdk/spdk_pid75761 00:28:25.029 Removing: /var/run/dpdk/spdk_pid75809 00:28:25.029 Removing: /var/run/dpdk/spdk_pid75821 00:28:25.029 Removing: /var/run/dpdk/spdk_pid75946 00:28:25.029 Removing: /var/run/dpdk/spdk_pid75977 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76058 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76106 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76137 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76193 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76234 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76264 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76305 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76335 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76376 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76406 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76442 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76477 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76513 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76548 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76584 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76619 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76655 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76685 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76726 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76756 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76800 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76833 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76874 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76905 00:28:25.029 Removing: /var/run/dpdk/spdk_pid76976 00:28:25.029 Removing: /var/run/dpdk/spdk_pid77070 00:28:25.029 Removing: /var/run/dpdk/spdk_pid77215 00:28:25.029 Removing: /var/run/dpdk/spdk_pid77283 00:28:25.029 Removing: /var/run/dpdk/spdk_pid77314 00:28:25.029 Removing: /var/run/dpdk/spdk_pid77726 00:28:25.289 Removing: /var/run/dpdk/spdk_pid77813 00:28:25.289 Removing: /var/run/dpdk/spdk_pid77911 00:28:25.289 Removing: /var/run/dpdk/spdk_pid77953 00:28:25.289 Removing: /var/run/dpdk/spdk_pid77973 00:28:25.289 Removing: /var/run/dpdk/spdk_pid78049 00:28:25.289 Removing: /var/run/dpdk/spdk_pid78661 00:28:25.289 Removing: /var/run/dpdk/spdk_pid78692 00:28:25.289 Removing: /var/run/dpdk/spdk_pid79138 00:28:25.289 Removing: /var/run/dpdk/spdk_pid79230 00:28:25.289 Removing: /var/run/dpdk/spdk_pid79329 00:28:25.289 Removing: /var/run/dpdk/spdk_pid79371 00:28:25.289 Removing: /var/run/dpdk/spdk_pid79391 00:28:25.289 Removing: /var/run/dpdk/spdk_pid79421 00:28:25.289 Removing: /var/run/dpdk/spdk_pid81257 00:28:25.289 Removing: /var/run/dpdk/spdk_pid81379 00:28:25.289 Removing: /var/run/dpdk/spdk_pid81383 00:28:25.289 Removing: /var/run/dpdk/spdk_pid81400 00:28:25.289 Removing: /var/run/dpdk/spdk_pid81449 00:28:25.289 Removing: /var/run/dpdk/spdk_pid81453 00:28:25.289 Removing: /var/run/dpdk/spdk_pid81465 00:28:25.289 Removing: /var/run/dpdk/spdk_pid81510 00:28:25.289 Removing: /var/run/dpdk/spdk_pid81514 00:28:25.289 Removing: /var/run/dpdk/spdk_pid81526 00:28:25.289 Removing: /var/run/dpdk/spdk_pid81571 00:28:25.289 Removing: /var/run/dpdk/spdk_pid81575 00:28:25.289 Removing: /var/run/dpdk/spdk_pid81587 00:28:25.289 Removing: /var/run/dpdk/spdk_pid82946 00:28:25.289 Removing: /var/run/dpdk/spdk_pid83024 00:28:25.289 Removing: 
/var/run/dpdk/spdk_pid83912 00:28:25.289 Removing: /var/run/dpdk/spdk_pid84267 00:28:25.289 Removing: /var/run/dpdk/spdk_pid84332 00:28:25.289 Removing: /var/run/dpdk/spdk_pid84388 00:28:25.289 Removing: /var/run/dpdk/spdk_pid84448 00:28:25.289 Removing: /var/run/dpdk/spdk_pid84538 00:28:25.289 Removing: /var/run/dpdk/spdk_pid84601 00:28:25.289 Removing: /var/run/dpdk/spdk_pid84730 00:28:25.289 Removing: /var/run/dpdk/spdk_pid84995 00:28:25.289 Removing: /var/run/dpdk/spdk_pid85026 00:28:25.289 Removing: /var/run/dpdk/spdk_pid85443 00:28:25.289 Removing: /var/run/dpdk/spdk_pid85616 00:28:25.289 Removing: /var/run/dpdk/spdk_pid85709 00:28:25.289 Removing: /var/run/dpdk/spdk_pid85803 00:28:25.289 Removing: /var/run/dpdk/spdk_pid85841 00:28:25.289 Removing: /var/run/dpdk/spdk_pid85866 00:28:25.289 Removing: /var/run/dpdk/spdk_pid86152 00:28:25.289 Removing: /var/run/dpdk/spdk_pid86189 00:28:25.289 Removing: /var/run/dpdk/spdk_pid86237 00:28:25.289 Removing: /var/run/dpdk/spdk_pid86577 00:28:25.289 Removing: /var/run/dpdk/spdk_pid86721 00:28:25.289 Removing: /var/run/dpdk/spdk_pid87506 00:28:25.289 Removing: /var/run/dpdk/spdk_pid87608 00:28:25.289 Removing: /var/run/dpdk/spdk_pid87761 00:28:25.289 Removing: /var/run/dpdk/spdk_pid87847 00:28:25.289 Removing: /var/run/dpdk/spdk_pid88147 00:28:25.289 Removing: /var/run/dpdk/spdk_pid88376 00:28:25.289 Removing: /var/run/dpdk/spdk_pid88716 00:28:25.289 Removing: /var/run/dpdk/spdk_pid88892 00:28:25.289 Removing: /var/run/dpdk/spdk_pid89015 00:28:25.289 Removing: /var/run/dpdk/spdk_pid89057 00:28:25.289 Removing: /var/run/dpdk/spdk_pid89180 00:28:25.550 Removing: /var/run/dpdk/spdk_pid89194 00:28:25.550 Removing: /var/run/dpdk/spdk_pid89230 00:28:25.550 Removing: /var/run/dpdk/spdk_pid89414 00:28:25.550 Removing: /var/run/dpdk/spdk_pid89611 00:28:25.550 Removing: /var/run/dpdk/spdk_pid90038 00:28:25.550 Removing: /var/run/dpdk/spdk_pid90464 00:28:25.550 Removing: /var/run/dpdk/spdk_pid90898 00:28:25.550 Removing: /var/run/dpdk/spdk_pid91384 00:28:25.550 Removing: /var/run/dpdk/spdk_pid91525 00:28:25.550 Removing: /var/run/dpdk/spdk_pid91601 00:28:25.550 Removing: /var/run/dpdk/spdk_pid92185 00:28:25.550 Removing: /var/run/dpdk/spdk_pid92239 00:28:25.550 Removing: /var/run/dpdk/spdk_pid92698 00:28:25.550 Removing: /var/run/dpdk/spdk_pid93046 00:28:25.550 Removing: /var/run/dpdk/spdk_pid93530 00:28:25.550 Removing: /var/run/dpdk/spdk_pid93641 00:28:25.550 Removing: /var/run/dpdk/spdk_pid93677 00:28:25.550 Removing: /var/run/dpdk/spdk_pid93730 00:28:25.550 Removing: /var/run/dpdk/spdk_pid93769 00:28:25.550 Removing: /var/run/dpdk/spdk_pid93822 00:28:25.550 Removing: /var/run/dpdk/spdk_pid93990 00:28:25.550 Removing: /var/run/dpdk/spdk_pid94072 00:28:25.550 Removing: /var/run/dpdk/spdk_pid94139 00:28:25.550 Removing: /var/run/dpdk/spdk_pid94189 00:28:25.550 Removing: /var/run/dpdk/spdk_pid94227 00:28:25.550 Removing: /var/run/dpdk/spdk_pid94278 00:28:25.550 Removing: /var/run/dpdk/spdk_pid94426 00:28:25.550 Removing: /var/run/dpdk/spdk_pid94624 00:28:25.550 Removing: /var/run/dpdk/spdk_pid95042 00:28:25.550 Removing: /var/run/dpdk/spdk_pid95446 00:28:25.550 Removing: /var/run/dpdk/spdk_pid95866 00:28:25.550 Removing: /var/run/dpdk/spdk_pid96296 00:28:25.550 Clean 00:28:25.550 16:04:00 -- common/autotest_common.sh@1447 -- # return 0 00:28:25.550 16:04:00 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:25.550 16:04:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:25.550 16:04:00 -- common/autotest_common.sh@10 -- # set +x 
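The Cleaning stage above removes DPDK runtime state from /var/run/dpdk (the spdk0 config and fbarray files plus the accumulated per-pid entries), and the earlier remove_shm calls deleted the FTL metadata files kept in /dev/hugepages under the device UUID. A condensed sketch of that cleanup, assuming the path layout the log shows; the glob patterns are illustrative rather than the exact harness code.

```bash
#!/usr/bin/env bash
# Illustrative cleanup mirroring the "Cleaning" / remove_shm output above.
# Paths follow the log; the patterns are assumptions, not the harness source.

# DPDK runtime state left behind by SPDK targets (config, fbarrays, pid entries).
rm -rf /var/run/dpdk/spdk0
rm -rf /var/run/dpdk/spdk_pid*

# FTL metadata kept in hugepage-backed files named by device UUID
# (e.g. ftl_68669144-..._band_md, ..._l2p_l1, ..._sb_shm in the trace).
rm -f /dev/hugepages/ftl_*_band_md /dev/hugepages/ftl_*_l2p_l* \
      /dev/hugepages/ftl_*_nvc_md /dev/hugepages/ftl_*_p2l_pool \
      /dev/hugepages/ftl_*_sb* /dev/hugepages/ftl_*_trim_* \
      /dev/hugepages/ftl_*_vmap

# Shared-memory leftover checked by remove_shm.
rm -f /dev/shm/iscsi
```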
00:28:25.809 16:04:00 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:25.809 16:04:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:25.809 16:04:00 -- common/autotest_common.sh@10 -- # set +x 00:28:25.809 16:04:00 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:25.809 16:04:00 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:25.809 16:04:00 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:25.809 16:04:00 -- spdk/autotest.sh@391 -- # hash lcov 00:28:25.809 16:04:00 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:25.809 16:04:00 -- spdk/autotest.sh@393 -- # hostname 00:28:25.809 16:04:00 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:25.809 geninfo: WARNING: invalid characters removed from testname! 00:28:52.349 16:04:23 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:52.350 16:04:26 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:54.253 16:04:28 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:56.156 16:04:30 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:58.060 16:04:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:59.964 16:04:34 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:02.499 16:04:36 -- spdk/autotest.sh@400 -- # rm -f cov_base.info 
cov_test.info OLD_STDOUT OLD_STDERR 00:29:02.499 16:04:36 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:02.499 16:04:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:02.499 16:04:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.499 16:04:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.499 16:04:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.499 16:04:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.499 16:04:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.499 16:04:36 -- paths/export.sh@5 -- $ export PATH 00:29:02.499 16:04:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.499 16:04:36 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:02.499 16:04:36 -- common/autobuild_common.sh@437 -- $ date +%s 00:29:02.499 16:04:36 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721491476.XXXXXX 00:29:02.499 16:04:36 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721491476.jX4UHz 00:29:02.499 16:04:36 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:29:02.499 16:04:36 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:29:02.499 16:04:36 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:29:02.499 16:04:36 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:29:02.499 16:04:36 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:02.499 16:04:36 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:02.499 16:04:36 -- common/autobuild_common.sh@453 -- $ get_config_params 00:29:02.499 16:04:36 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:29:02.499 16:04:36 -- common/autotest_common.sh@10 -- $ set +x 00:29:02.499 16:04:36 
-- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:29:02.499 16:04:36 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:29:02.499 16:04:36 -- pm/common@17 -- $ local monitor 00:29:02.499 16:04:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:02.499 16:04:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:02.499 16:04:36 -- pm/common@21 -- $ date +%s 00:29:02.499 16:04:36 -- pm/common@25 -- $ sleep 1 00:29:02.499 16:04:36 -- pm/common@21 -- $ date +%s 00:29:02.499 16:04:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721491476 00:29:02.499 16:04:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721491476 00:29:02.499 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721491476_collect-vmstat.pm.log 00:29:02.499 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721491476_collect-cpu-load.pm.log 00:29:03.436 16:04:37 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:29:03.436 16:04:37 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:03.436 16:04:37 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:03.436 16:04:37 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:03.436 16:04:37 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:29:03.436 16:04:37 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:03.436 16:04:37 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:03.436 16:04:37 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:03.436 16:04:37 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:03.436 16:04:37 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:03.436 16:04:37 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:03.436 16:04:37 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:03.436 16:04:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:03.436 16:04:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:03.436 16:04:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:03.436 16:04:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:03.436 16:04:37 -- pm/common@44 -- $ pid=98048 00:29:03.436 16:04:37 -- pm/common@50 -- $ kill -TERM 98048 00:29:03.436 16:04:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:03.436 16:04:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:03.436 16:04:37 -- pm/common@44 -- $ pid=98050 00:29:03.436 16:04:37 -- pm/common@50 -- $ kill -TERM 98050 00:29:03.436 + [[ -n 5891 ]] 00:29:03.436 + sudo kill 5891 00:29:03.446 [Pipeline] } 00:29:03.466 [Pipeline] // timeout 00:29:03.472 [Pipeline] } 00:29:03.490 [Pipeline] // stage 00:29:03.496 [Pipeline] } 00:29:03.514 [Pipeline] // catchError 00:29:03.528 [Pipeline] stage 00:29:03.530 [Pipeline] { 
(Stop VM) 00:29:03.545 [Pipeline] sh 00:29:03.829 + vagrant halt 00:29:07.169 ==> default: Halting domain... 00:29:13.742 [Pipeline] sh 00:29:14.023 + vagrant destroy -f 00:29:17.308 ==> default: Removing domain... 00:29:17.320 [Pipeline] sh 00:29:17.601 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:29:17.610 [Pipeline] } 00:29:17.628 [Pipeline] // stage 00:29:17.634 [Pipeline] } 00:29:17.650 [Pipeline] // dir 00:29:17.656 [Pipeline] } 00:29:17.672 [Pipeline] // wrap 00:29:17.679 [Pipeline] } 00:29:17.692 [Pipeline] // catchError 00:29:17.699 [Pipeline] stage 00:29:17.700 [Pipeline] { (Epilogue) 00:29:17.711 [Pipeline] sh 00:29:17.990 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:23.365 [Pipeline] catchError 00:29:23.366 [Pipeline] { 00:29:23.381 [Pipeline] sh 00:29:23.665 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:23.924 Artifacts sizes are good 00:29:23.933 [Pipeline] } 00:29:23.951 [Pipeline] // catchError 00:29:23.962 [Pipeline] archiveArtifacts 00:29:23.969 Archiving artifacts 00:29:24.081 [Pipeline] cleanWs 00:29:24.092 [WS-CLEANUP] Deleting project workspace... 00:29:24.092 [WS-CLEANUP] Deferred wipeout is used... 00:29:24.097 [WS-CLEANUP] done 00:29:24.099 [Pipeline] } 00:29:24.115 [Pipeline] // stage 00:29:24.121 [Pipeline] } 00:29:24.136 [Pipeline] // node 00:29:24.142 [Pipeline] End of Pipeline 00:29:24.175 Finished: SUCCESS
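For reference, the coverage post-processing earlier in the log (the lcov calls between timing_exit autotest and autopackage) reduces to a standard capture, combine, and filter flow. Below is a condensed sketch using the same tracefiles and exclude patterns the log shows; flags not visible above (such as the genhtml rc options and the -t testname) are omitted rather than guessed.

```bash
#!/usr/bin/env bash
# Condensed form of the lcov invocations in the log above.
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
SRC=/home/vagrant/spdk_repo/spdk
OUT=$SRC/../output

# Capture post-test counters, then merge with the pre-test baseline capture.
lcov $LCOV_OPTS -c -d "$SRC" -o "$OUT/cov_test.info"
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Strip third-party and helper code, exactly the patterns the log removes.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done
```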